Online VNF Lifecycle Management in a
MEC-enabled 5G IoT Architecture
Ioannis Sarrigiannis, Student Member, IEEE, Kostas Ramantas, Elli Kartsakli, Senior Member, IEEE,
Prodromos-Vasileios Mekikis, Angelos Antonopoulos, Senior Member, IEEE,
and Christos Verikoukis, Senior Member, IEEE
Abstract—The upcoming fifth generation (5G) of mobile communications urges software defined networks (SDN) and network function virtualization (NFV) to join forces with the multi-access edge computing (MEC) cause. Thus, reduced latency and increased capacity at the edge of the network can be achieved, to satisfy the requirements of the internet of things (IoT) ecosystem. If not properly orchestrated, the flexibility of virtual network function (VNF) incorporation, in terms of deployment and lifecycle management, may cause serious issues in the NFV scheme. As the service level agreements (SLAs) of the 5G applications compete in an environment with traffic variations and VNF placement options with diverse computing or networking resources, an online placement approach is needed. In this paper, we discuss the VNF lifecycle management challenges that arise from such a heterogeneous architecture, in terms of VNF onboarding and scheduling. In particular, we enhance the intelligence of the NFV orchestrator (NFVO) by providing i) a latency-based embedding mechanism, where the VNFs are initially allocated to the appropriate tier, and ii) an online scheduling algorithm, where the VNFs are instantiated, scaled, migrated and destroyed based on the actual traffic. Finally, we design and implement a MEC-enabled 5G platform to evaluate our proposed mechanisms in real-life scenarios. The experimental results demonstrate that our proposed scheme maximizes the number of served users in the system by taking advantage of the online allocation of edge and core resources, without violating the application SLAs.
Index Terms—5G, IoT, Live Migration, MEC, NFV, Scaling, SDN, Testbed, VNF Orchestration, VNF Placement
I. INTRODUCTION
THE exponential increase in requests for a variety of ser-
vices creates the need for an omnipresent network, which
should be faster, more responsive and reliable, and easily
accessed under any conditions. According to recent reports,
by 2024 the mobile subscriptions will reach 8.3bn, a number
that exceeds the current worldwide population by 0.2bn [1].
Manuscript received August 1, 2019; revised September 10, 2019; accepted
September 15, 2019. This work has been supported in part by the research
projects SPOTLIGHT (722788), AGAUR (2017-SGR-891 and 2017-DI-068),
SPOT5G (TEC2017-87456-P) and SEMIoTICS (780315).
I. Sarrigiannis is with the Department of Signal Theory and Communications (TSC), Polytechnic University of Catalonia (UPC), Barcelona, Spain, and also with Iquadrat Informatica S.L., Barcelona, Spain (e-mail: isarrigiannis@iquadrat.com).
K. Ramantas, E. Kartsakli and P-V. Mekikis are with Iquadrat Informatica
S.L., Barcelona, Spain, (e-mail: kramantas, ellik, vmekikis@iquadrat.com).
A. Antonopoulos and C. Verikoukis are with the Telecommunications Tech-
nological Centre of Catalonia (CTTC/CERCA), Spain, (e-mail: aantonopou-
los, cveri@cttc.es).
Copyright © 2019 IEEE. Personal use of this material is permitted. However, permission to use this material for any other purposes must be obtained from the IEEE by sending a request to pubs-permissions@ieee.org.
Approximately 45% of this cellular traffic is expected to
be generated by an expanding ecosystem of smart connected
devices, known as the internet of things (IoT) paradigm. The
IoT growth is further accelerated by the penetration of IoT
applications and services in our everyday life, as well as in a
large segment of vertical industries, such as connected cars,
smart homes, smart metering and industry automation [2].
The wide range of IoT services calls for a disruptive,
highly efficient, scalable and flexible communication network,
able to cope with the increasing demands and number of
connected devices, as well as the diverse and stringent appli-
cation requirements. For instance, Ultra High Definition video
streaming or augmented reality applications have increased
bandwidth requirements, whereas autonomous driving, tactile
Internet and factory automation require low end-to-end (E2E)
latency, which in some cases should remain below 1ms. In this
context, the emerging fifth generation (5G) of wireless com-
munications, bringing together a set of enabling technologies,
will support and advance the potential of the IoT technology.
Software defined networking (SDN) is one of the key
enabling technologies that paved the road towards the 5G
revolution, by permitting the replacement of specific network
equipment, used in a dedicated way, with software that can be
executed on general-purpose hardware, enabling the separation
of the data and the control plane. Furthermore, the network
function virtualization (NFV) technology [3] enables the virtu-
alization of this networking software; hence, application and
network functionalities are handled as virtual network func-
tions (VNFs) and managed by an NFV orchestrator (NFVO),
able to have control upon the various locations of a distributed
system [4]. The flexibility offered by the SDN/NFV network
design is taken one step further with the network slicing
paradigm, which enables the creation of multiple logical
networks over a common physical infrastructure, offering
the necessary isolation to support multiple 5G services with
different requirements [5].
The virtualization of functions and the flexibility in their
placement is highly aligned with the concept of multi-access
edge computing (MEC), recently proposed by the European
Telecommunication Standards Institute (ETSI) [6]. MEC tech-
nology is defined as the cloud computing capabilities offered
at the edge of the network, in the end users’ proximity.
MEC is responsible for delivering computing, storage and
networking resources to the end user, thus achieving significant
reductions in service response times and increasing reliability
and security, since services are located much closer to the
users, instead of a remote cloud. IoT is widely considered
one of the key use cases of the MEC technology [6], [7].
First, a wide range of IoT services can be deployed to
the edge, including IoT data aggregation services, big data
analytics, video streaming transcoders, etc., ensuring low-
latency and ultra-reliable performance. Second, IoT devices,
which often have limited computational and storage resources
(e.g., sensors, smart meters, etc.), can significantly enhance
their capabilities by offloading tasks and services to the edge
[8]–[15].
Even though MEC is clearly one of the major players
towards the 5G realization and the future IoT services, the
technology is still in its infancy [16], and challenges [17], such
as efficient deployment, resource allocation and optimization,
and application lifecycle management, arise. Only recently, the
ETSI MEC group released the initial phase 2 specifications
that deal with the architecture, framework and general princi-
ples for service Application Programming Interfaces (APIs),
but no agreement on the standardization has yet been achieved
[18]. Another non-trivial issue refers to the scheduling and
placement of VNFs over the underlying infrastructure, includ-
ing different MEC and remote cloud locations. In [19], cloud
service deployment is modeled as a graph embedding problem,
where service VNF Forwarding Graphs (VNFFGs), or VNF
chains, are embedded on top of a network of hypervisors
(or compute nodes). This is an NP-complete problem, and,
hence, it is very time consuming to find an optimal solution
even for small networks. However, reaction times in modern
cloud-native infrastructures have gradually shortened to the
point where services are individually scaled-out and scaled-in,
responding to user demand in a matter of seconds. In order
to keep up with the challenging cloud-native environments,
service orchestration and scaling operations must be performed in real-time, with minimal computational complexity.
Taking a closer look at the state-of-the-art works on the VNF placement problem, it can be studied through the placement and management of virtual machines (VMs) [20]. Furthermore, we can find two different
approaches in the literature: i) offline, where the placement
decision is taken in order to satisfy end-users’ requests, under
various constraints, and ii) online, where, in addition to the
initial placement decision, real traffic data, i.e., load, are
utilized to trigger possible VM reallocation events [21]. On
the one hand, concerning the offline works, authors in [22]
present an advanced predictive placement algorithm where the
optimal placement location is defined by the least used location
that is closest to the majority of the user equipment (UEs). In
[23], a mathematical optimization model for VNF placement
and provisioning is proposed, guaranteeing the quality of
service (QoS) by including latency into the VNF chaining
constraints. They focus, however, only on the placement of the
virtualized LTE core functions, omitting the management and
orchestration of the cloud applications and services that are
co-hosted in the same infrastructure. Authors in [24] study the
dynamic deployment of network services (chains) on different
VMs and formulate their reallocation of VNFs as a mixed
integer program, focusing on server power consumption. The
migration solution provided in this work, though, is applicable only in networks whose traffic pattern repeats over a specific time interval. One basic limitation of the aforementioned
works is that the performance of the proposed algorithms is
assessed only through simulations.
On the other hand, regarding the online solutions, in [25] the
authors study how to deploy and scale VNF chains on the fly,
using VNF replication across geo-distributed datacenters for
operational cost minimization. Nevertheless, they limit each
VNF chain deployment and scaling within the same datacen-
ter. Furthermore, a traffic forecasting method for placing or
scaling the VNF instances to minimize the inter-rack traffic is
presented in [26], within the premises of a cloud datacenter. Even
though a real implementation is offered, along with operator
traffic driven simulations, the placement method in this work
does not take into consideration the different requirements
each VNF might have. Finally, both works are limited within
the premises of a datacenter, failing to exploit the potential
benefits offered by edge-cloud architectures.
On a different note, there is a very limited number of experimental works in the literature that tackle MEC implementations, where new challenges arise in order to deploy
and orchestrate a programmable and flexible MEC-enabled
5G testbed. For instance, a 5G-aware proof of concept of an
evaluation testbed with MEC capabilities has been described
in detail in [27], though without conducting any real experiments to provide results. Furthermore, [28], [29] are based on containers, another virtualization technology, which shares the host operating system and provides process-level isolation only, whereas their orchestrators have limited capabilities and cannot support migration features. Finally, these works are
limited to the technical implementation of the testbeds and
do not tackle the VNF placement problem. To the best of
our knowledge, there is no related work that: i) combines
the interplay of the MEC with the cloud, in a virtualized
manner, ii) proposes and implements an online VNF placement
algorithm, iii) exploits VNF migration and scaling capabilities
to meet the service demands in real-time, and iv) provides
experimental results over a real 5G testbed implementation.
In order to efficiently manage the NFV ecosystem, there is
the need for online and agile techniques for scheduling and
orchestration of VNFs, as well as real environments that this
technology can be applied to, in order to provide transparent
and diligent testing and assessment. In [30], we presented a
MEC-enabled 5G architecture, distributing the computational
and network resources to the edge and core tiers. In this
paper, we take a significant step further by implementing
this architecture in a real testbed environment, utilizing VM
technology. Furthermore, we propose two novel algorithms for
the joint orchestration of the MEC and Cloud resources, thus
enhancing the NFVO capabilities. Specifically, we first present
an algorithm for the VNFFG embedding of virtualized chained
services, taking into account their latency requirements and
service priorities (e.g., based on their criticality). Then, we
propose another algorithm for the real-time allocation of the
VNFs to the MEC and cloud resources, leveraging real-time
service scale-out and scale-in features to meet the user service
requests. Additionally, the second algorithm supports live ser-
vice migration to further enhance the initial service placement,
in order to efficiently handle the latency-critical applications.

Fig. 1. MEC-enabled 5G IoT Architecture
Finally, we proceed to the validation of our proposed algorithms in a MEC-enabled 5G testbed implementation, deployed using open-source software over general-purpose hardware. The obtained experimental results, based on real-world 5G scenarios and cloud applications, provide useful insights into the potential of MEC-enabled architectures for real-life applications.
The remainder of this paper is organized as follows. Section
II presents the overall NFV-enabled architecture, along with
some key concepts and the considered system model. Section
III provides the proposed orchestration algorithms for VNF on-
boarding and scheduling. Section IV discusses the 5G testbed
implementation and the employed open-source tools for its re-
alization. Section V delivers the obtained experimental results,
thoroughly explaining the different experimental scenarios,
whereas Section VI is devoted to the paper’s conclusions.
II. NFV ARCHITECTURE
We consider a MEC-enabled 5G IoT architecture depicted
in Fig. 1. A heterogeneous radio access network (RAN)
topology is considered for the connection of the IoT devices
that may employ different wireless technologies. In particular,
we consider a network that includes standalone 5G base
stations (gNBs), IoT access points (APs) and a cloud RAN
deployment, where base band units (BBUs) are connected
with remote radio head (RRH) units. This architecture fully
supports NFV by enabling the virtualization of compute and
network resources at the MEC and cloud hypervisors, located
at the edge and core tier respectively. The virtualized infras-
tructure manager (VIM) is responsible for the management
and control of the compute, storage and network resources of
the NFV infrastructure (NFVI), while the NFVO performs the
compute and network resource orchestration.
A. Edge Computing
The considered architecture includes two tiers of computa-
tional resources: the cloud at the core tier and the MEC at the
edge tier. These are in the form of hypervisors (or compute
nodes) where application and network VNFs are hosted for
the duration of their lifecycle. Hypervisors are interconnected
with an SDN data-plane, forming a leaf-spine topology, i.e.,
a mesh with a constant number of hops. Although different
topologies have been considered in the literature, the leaf-spine
is standardized in modern data centers as it simplifies VNF
scheduling and guarantees a fixed latency for the data plane.
It must be noted that the edge tier, i.e., the MEC hosts, contains
limited computing resources. These are typically allocated to
VNFs that should be placed closer to the UE-side to satisfy
specific service requirements (typically low latency).
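To make the fixed-hop property concrete, the following minimal Python sketch (our own illustration, not part of the deployed testbed) builds a leaf-spine adjacency and checks that any two leaf switches always share a spine, so hypervisors attached to different leaves communicate over exactly two hops:

```python
from itertools import combinations

def leaf_spine(num_leaves: int, num_spines: int) -> dict:
    """Build a leaf-spine adjacency: every leaf connects to every spine."""
    adj = {f"leaf{l}": {f"spine{s}" for s in range(num_spines)}
           for l in range(num_leaves)}
    for s in range(num_spines):
        adj[f"spine{s}"] = {f"leaf{l}" for l in range(num_leaves)}
    return adj

adj = leaf_spine(num_leaves=4, num_spines=2)
# Any two leaves share at least one spine, so the leaf -> spine -> leaf
# path is always two hops and the data-plane latency is constant.
for a, b in combinations(range(4), 2):
    assert adj[f"leaf{a}"] & adj[f"leaf{b}"]
```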
Fig. 2. VNF chaining for face recognition
B. Virtual Network Functions
Fully adopting the NFV paradigm, we consider that the 5G
cloud applications are implemented in the form of VNFFGs
that result in VNF chaining. Each virtual link has its own
bandwidth and latency requirements, which are typically en-
coded in the VNF descriptor (VNFD) file, along with other
VNF metadata. During VNF placement, network slicing can
be employed to guarantee the networking requirements of the
VNFs. Network slicing ensures service isolation and offers
performance guarantees to the service tenants by reserving
appropriate resources as denoted by the VNFD. Network
slicing in 5G networks is supported by the programmable
infrastructure, via appropriate northbound APIs. However,
dedicated slices bear a significant cost for service providers,
as resources are reserved even when they are not used by
clients, hence negating any potential statistical multiplexing
gains. Therefore, dedicated slices are typically associated with
services with high QoS requirements.
Furthermore, based on their delay constraints, VNFs can be classified as latency-critical VNFs (LCVNFs), which are sensitive to latency, and latency-tolerant VNFs (LTVNFs), which can tolerate a higher degree of delay. Accordingly, the 5G cloud applications can be classified into three categories: i) real-time applications, consisting of high priority LCVNFs (HP LCVNFs), ii) near real-time applications, consisting of low priority LCVNFs (LP LCVNFs), and iii) non real-time applications that consist of LTVNFs. The VNF chaining feature, though, allows us to combine and connect the aforementioned VNFs. In general, due to its limited resources compared with the cloud, the MEC entity is usually reserved for LCVNFs, which are placed in proximity to the UEs, in order to minimize latency. On the other hand, LTVNFs, or even LP LCVNFs in specific situations, can be safely deployed to the cloud.
An example of VNF chaining is given in Fig. 2. A set of VNFs is chained both in the same and in separate hypervisors, in order to identify a person at the entrance of a company. Although the face recognition application [31] is broadly known through the cloudlets edge computing concept, it is a use case that is also compatible with MEC [32]. To achieve faster response times for the employees of the company, a MEC node hosting two chained services is deployed to the edge: i) a face recognition VNF, and ii) a database (DB) VNF. If the person is identified in the employee DB, the whole process is finalized. Otherwise, VNF #1 sends its output to the face recognition VNF in the cloud, i.e., VNF #3, where the same procedure occurs with a general DB (VNF #4) including employees of the company from other locations, or customers.
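As a rough illustration of this control flow (a sketch of the chained lookup only, not the actual VNF implementation; the function and object names are hypothetical):

```python
def identify_person(face, employee_db, general_db):
    """Sketch of the chained face-recognition flow of Fig. 2.

    VNF #1 / VNF #2: recognition against the employee DB at the MEC.
    VNF #3 / VNF #4: fallback recognition against the general DB in the cloud.
    """
    person = employee_db.match(face)   # VNF #1 queries the edge DB (VNF #2)
    if person is not None:
        return person                  # identified at the edge: process finalized
    # Otherwise VNF #1 forwards its output to the cloud chain (VNF #3 -> #4),
    # paying the MEC-to-cloud latency only for faces unknown at the edge.
    return general_db.match(face)
```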
C. VNF lifecycle
Each individual VNF has a lifecycle, which is controlled and
managed by the NFVO. The NFVO resides in the core tier and
can be considered as the central controller of the system, in
terms of filtering the incoming requests and (re)allocating the
compute and network resources. It executes periodic checks
in order to monitor the current availability of compute and
network resources, and ensures that the NFVI adapts to
traffic variations. Overall, the VNF lifecycle consists of the
following:
• Day-0 configuration, which includes VNF onboarding and resource allocation, along with network service configuration.
• Scale-out, where horizontal scale-out involves creating more instances of a given VNF for load balancing purposes; it is typically triggered when the allocated CPU, memory or network resource utilization increases upon increased traffic.
• Scale-in, which is the opposite process of scale-out and is triggered when a VNF is underutilized.
• Live migration, which involves moving a VNF to a different hypervisor for optimization purposes, without service interruption [21]. It includes running both instances (in the old and new hypervisor) in parallel while the service migration is performed, and only migrating the RAM contents as a final step.
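Purely as an illustrative sketch of how these four operations can be represented in an orchestrator's control loop (the event names and handler below are our own, not the API of a specific NFVO):

```python
from enum import Enum, auto

class LifecycleEvent(Enum):
    DAY0_ONBOARD = auto()  # onboarding, resource allocation, service configuration
    SCALE_OUT = auto()     # instantiate an extra replica under increased traffic
    SCALE_IN = auto()      # destroy the last created replica when underutilized
    LIVE_MIGRATE = auto()  # move a running VNF to another hypervisor

def handle(event: LifecycleEvent, vnf: str) -> str:
    """Map a lifecycle event to the corresponding NFVO action."""
    return {
        LifecycleEvent.DAY0_ONBOARD: f"onboard {vnf} and configure its service",
        LifecycleEvent.SCALE_OUT: f"instantiate a new replica of {vnf}",
        LifecycleEvent.SCALE_IN: f"terminate the last replica of {vnf}",
        LifecycleEvent.LIVE_MIGRATE: f"live migrate {vnf}, RAM copied last",
    }[event]
```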
D. System model
In our system model, Fig. 3, we focus on the VNF placement and resource allocation between the edge and the core tiers. At the core tier, there are M cloud hypervisors (Cloud{M}), with maximum capacity HCloudMax{M} and current utilization HCloud{M} per hypervisor. Respectively, at the edge tier there are N MEC hypervisors (MEC{N}), with maximum capacity HMECMax{N} and current utilization HMEC{N} per hypervisor. In this work, we focus on the interconnection of each MEC hypervisor, in a leaf-spine topology, with the cloud hypervisors. There are incoming VNFs in the system, some of which could be chained. For each VNF_i{Type, Resources, Hypervisor} we define a type, i.e., HP LCVNF, LP LCVNF or LTVNF, the required resources, which cannot exceed those of the hypervisor with the maximum capacity, and the hypervisor where it can be deployed. With respect to the service onboarding, we consider the following setup: i) the real-time applications, hosted in HP LCVNFs, are deployed to the edge, ii) the near real-time applications, hosted in LP LCVNFs, can be allocated either to the edge or to the core tier, and iii) the non-real-time applications, hosted in LTVNFs, are deployed to the core tier.

The scaling functionality of our system is triggered based on the incoming requests per VNF, as multiple users can request data from the same VNF, resulting in increased VNF load.

Fig. 3. System model

More specifically, we define: i) the scale-out threshold, i.e., the value of the CPU utilization above which a new VNF of the same type is instantiated, ii) the scale-in threshold, i.e., the value of the CPU utilization below which the last created VNF is deleted, and iii) the cooldown period, i.e., the predefined time interval that should pass before a consecutive scaling event at the same VNF may occur. Finally, the live migration functionality can be triggered upon a scale-in or scale-out event and involves only the shifting of the LP LCVNFs.
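A minimal sketch of this trigger logic, under our own simplifying assumptions (a single CPU metric per VNF and the threshold values later adopted in Section V; the class is illustrative, not the testbed code):

```python
import time

class ScalingTrigger:
    """Emit scale-out/scale-in decisions from CPU utilization samples."""

    def __init__(self, scale_out=0.90, scale_in=0.30, cooldown=180.0):
        self.scale_out = scale_out   # utilization above which a new VNF is instantiated
        self.scale_in = scale_in     # utilization below which the last VNF is deleted
        self.cooldown = cooldown     # seconds between consecutive scaling events
        self.last_event = -cooldown  # allow a scaling event immediately at start

    def decide(self, cpu_util, now=None):
        now = time.monotonic() if now is None else now
        if now - self.last_event < self.cooldown:
            return None              # still within the cooldown period
        if cpu_util > self.scale_out:
            self.last_event = now
            return "scale-out"
        if cpu_util < self.scale_in:
            self.last_event = now
            return "scale-in"
        return None

trigger = ScalingTrigger()
assert trigger.decide(0.95, now=0.0) == "scale-out"
assert trigger.decide(0.95, now=10.0) is None  # suppressed by the cooldown
```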
III. VNF ORCHESTRATION ALGORITHMS
In this section, we discuss the role of the NFVO in the
VNF lifecycle management, as well as the actual orchestration
algorithms. In order to keep up with the challenging cloud-native environments, where sub-second reaction times are sometimes required, fast online algorithms are proposed. More specifically, the VNF scheduling problem is split into three phases, which are centrally controlled by the NFVO:
• The VNFFG embedding phase is executed once during service initialization and onboarding, to allocate VNFs to the MEC or cloud hypervisors, based on delay constraints.
• Service scale-out is performed periodically, based on a user-defined cooldown period, and triggers a scheduling operation for all scaled-out VNFs. A fast online algorithm is devised to handle this operation, while a live migration step might be performed in cases of insufficient edge resources.
• Service scale-in is also a periodic process, which erases VNF instances when the user demand decreases, to free up resources when they are not needed. We propose a live service migration step to be performed after the scale-in operation to further optimize the VNF placement.

Algorithm 1 VNFFG Embedding
1: The VNFFG embedding process starts from the services with the highest QoS. We traverse all VNFs in the VNFFG breadth-first, starting from the entry point where the UE connects.
2: If the round-trip time to the cloud exceeds the latency constraints of the VNF links, the VNF is assigned to the MEC. Otherwise, it is assigned to the cloud.
3: If the MEC resources are exhausted, further deployment of HP LCVNFs is blocked, as well as their chained VNFs, unless they can tolerate the increased latency associated with the core tier (LP LCVNFs).
VNF scheduling is based on a cost function, which takes
into account the hypervisor resources consumed by the VNF,
i.e., CPU, memory and disk size, as well as bandwidth costs
to interconnect the VNFs in the VNFFG. In general, the
minimum cost is achieved when all VNFs of the same VNFFG
are placed at the same hypervisor. It gradually increases as
VNFs are placed at different hypervisors occupying network
links for communication, while MEC hypervisors are generally
assigned a higher cost than cloud hypervisors.
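As an example of such a cost function (a toy sketch under our own simplifying assumptions, with illustrative weights rather than the exact values used by the NFVO):

```python
def placement_cost(chain, placement, mec_weight=2.0, link_cost=1.0):
    """Toy VNFFG placement cost.

    chain: ordered list of (name, cpu, mem, disk) tuples forming the VNFFG.
    placement: dict name -> hypervisor id; MEC hypervisors start with "mec".
    Resources are charged per VNF, MEC capacity at a premium, and a bandwidth
    cost is added for every chained pair split across hypervisors.
    """
    cost = 0.0
    for name, cpu, mem, disk in chain:
        weight = mec_weight if placement[name].startswith("mec") else 1.0
        cost += weight * (cpu + mem + disk)
    for (a, *_), (b, *_) in zip(chain, chain[1:]):
        if placement[a] != placement[b]:
            cost += link_cost  # inter-hypervisor link occupied by the chain
    return cost

chain = [("vnf1", 1, 1, 1), ("vnf2", 1, 1, 1)]
same = placement_cost(chain, {"vnf1": "cloud0", "vnf2": "cloud0"})
split = placement_cost(chain, {"vnf1": "mec0", "vnf2": "cloud0"})
assert same < split  # co-locating the whole VNFFG minimizes the toy cost
```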
A. VNFFG Embedding
Although many different topologies have been considered
in the literature for the core and edge tiers, in this work
we consider a standard leaf-spine topology. This topology
simplifies the VNFFG routing over the physical infrastructure,
as all hypervisors in the core and edge tier are interconnected
in a mesh with a fixed number of hops (Fig. 3). The VNFFG
embedding is performed during the service bootstrapping
phase at the NFVO that assigns VNFs to the core or edge
tier based on their delay constraints. The edge tier hosts have
a higher operational expenditure (OPEX) than the core tier hypervisors and hence a higher deployment cost, which is reflected in the cost function. Thus, typically only a limited number of VNFs is deployed to the edge. Alg. 1 explains
the basic steps of the VNFFG embedding process.
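The following Python sketch mirrors the steps of Alg. 1 under simplifying assumptions of our own (one aggregate capacity unit per VNF and illustrative data structures):

```python
from collections import deque

def embed_vnffg(chain, entry, latency_bound, cloud_rtt, mec_free):
    """Latency-based tier assignment, sketching Alg. 1.

    chain: dict vnf -> list of downstream VNFs in the VNFFG.
    latency_bound: dict vnf -> maximum tolerated latency of its virtual link.
    cloud_rtt: round-trip time between the UE side and the core tier.
    mec_free: free MEC capacity, in VNF-sized units.
    """
    tier, queue = {}, deque([entry])         # breadth-first from the UE entry point
    while queue:
        vnf = queue.popleft()
        if latency_bound[vnf] < cloud_rtt:   # constraint tighter than the cloud RTT
            if mec_free > 0:
                tier[vnf], mec_free = "MEC", mec_free - 1
            else:
                tier[vnf] = "blocked"        # MEC exhausted: deployment blocked
        else:
            tier[vnf] = "Cloud"              # tolerates the core tier latency
        queue.extend(n for n in chain.get(vnf, []) if n not in tier and n not in queue)
    return tier

# e.g., the face-recognition chain of Fig. 2, with illustrative bounds in ms:
chain = {"vnf1": ["vnf2", "vnf3"], "vnf3": ["vnf4"]}
bounds = {"vnf1": 100, "vnf2": 100, "vnf3": 500, "vnf4": 500}
print(embed_vnffg(chain, "vnf1", bounds, cloud_rtt=150, mec_free=3))
# -> {'vnf1': 'MEC', 'vnf2': 'MEC', 'vnf3': 'Cloud', 'vnf4': 'Cloud'}
```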
B. Online VNF scheduling
VNF scheduling is an online problem, as VNFs are typ-
ically scaled-out and scaled-in within very fast time-frames,
in the order of seconds, based on current traffic. Many works in the literature solve an offline version of the problem, where the total number of VNFs is known during service bootstrapping and the real traffic of the VNFs is not taken into consideration; however, this assumption is not valid in modern cloud infrastructures. In this work, we assume that
only the VNF assignment to the core or edge tier has been
Algorithm 2 Online VNF scale-out/scale-in and dynamic live-migration scheduling.
Input: HMECMax{N}, HCloudMax{M}, HMEC{N}, HCloud{M},
VNF_i{Type, Resources, Hypervisor},
triggering event e, where e ∈ {scale-in, scale-out}, and VNF_e
Output: Hypervisor for VNF placement
1: if e = scale-out of VNF_e then
2:   if VNF_e{3} = MEC{e} then
3:     repeat
4:       if VNF_e{2} ≤ MEC{e} then
5:         allocate VNF_e on MEC{e}
6:         update MEC{e} resources
7:       else if VNF_e{1} = LP LCVNF AND VNF_e{2} ≤ max(HCloud) then
8:         allocate VNF_e on max(HCloud) & flag it
9:         update max(HCloud)
10:      else if VNF_i{1} = LP LCVNF exists on MEC{e} then
11:        if max(VNF_i{2}) ≤ max(HCloud) then
12:          live migrate VNF_i to max(HCloud) & flag it
13:          update HMEC{e}
14:          update max(HCloud)
15:        end if
16:      else
17:        reject scale-out request
18:        exit algorithm
19:      end if
20:    until VNF_e is allocated
21:  else if VNF_e{3} = Cloud{e} then
22:    if VNF_e{2} ≤ HCloud{e} then
23:      allocate VNF_e on Cloud{e}
24:      update HCloud{e} resources
25:    else if VNF_e{2} ≤ max(HCloud) then
26:      allocate VNF_e on max(HCloud)
27:      update max(HCloud) resources
28:    else
29:      reject scale-out request
30:      exit algorithm
31:    end if
32:  end if
33: else if e = scale-in of VNF_e then
34:   release VNF_e{2}
35:   update hypervisor_e resources
36:   if VNF_e{3} = MEC{e} then
37:     while flagged LP LCVNFs exist on Cloud AND HMEC{e} ≥ flagged VNF_i{2} do
38:       live migrate flagged VNF_i to HMEC{e}
39:       update HMEC{e}, HCloud
40:     end while
41:   end if
42: end if
TABLE I
TRIGGERING EVENTS AND ACTIONS

            Action #1                          Action #2             Action #3
Scale-out   Find VNF placement hypervisor      Allocate hypervisor   Instantiate VNF
            without SLA violation; perform     resources
            migrations, if necessary.
Scale-in    Terminate VNF                      Release hypervisor    Migrate flagged
                                               resources             VNFs
Fig. 4. Flow chart for scale-out triggering event of LCVNF on MEC
completed during the service bootstrapping phase, hence the
online scheduling algorithm only needs to assign the VNF
to the actual cloud or MEC hypervisor. In what follows,
VNFs are placed in hypervisors with sufficient compute,
memory and networking resources. This algorithm tries to first
accommodate the highest cost VNFs, starting from the hosts
with the highest available resources. The main algorithmic
steps of the proposed Alg. 2 for scheduling scaled-out VNFs
are explained as follows and they are generally performed
after a predefined cooldown period has elapsed. Furthermore,
the algorithm tries to accommodate higher priority VNFs via
live migration actions of lower priority VNFs, while it tries
to restore the balance of the system after a scale-in process.
Please note that our algorithm can be executed in combination with any NFVO that supports scaling capabilities and any VIM with live migration support.
In more detail, regarding the scale-out operation, we try to place the new VNF at the same hypervisor as the original VNF that is being scaled, in order to eliminate inter-hypervisor delays. Table I depicts the actions that are performed, depending on the triggering event.

Fig. 5. MEC-enabled 5G testbed

As Fig. 4 demonstrates, in case the original VNF resides in a MEC hypervisor and there are available resources, the new VNF is allocated to the same hypervisor as well. In case of insufficient MEC resources, in the event of: i) an LP LCVNF type, it can be directly allocated to the cloud hypervisor with the most free resources, and is flagged; ii) an HP LCVNF type, a live migration of existing LP LCVNFs to the cloud hypervisor with the maximum available resources takes place, starting with the VNF that occupies the most resources, in order to free up MEC resources for the incoming VNF, along with updating the bookkeeping of the migrated VNFs (flagging); or iii) no LP LCVNF existing at the MEC hypervisor, the scale-out request gets rejected. Conversely, on the occasion of a scale-in triggering event, the resources of its hypervisor are released. In the case of a MEC hypervisor, we migrate back possibly flagged LP LCVNFs, according to our bookkeeping.
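To ground the description, a condensed Python sketch of Alg. 2's scale-out branch follows (our own simplification with a single resource dimension and dictionaries for hypervisor state; the deployed implementation is the bash script of Section IV):

```python
def scale_out(vnf, vnfs, mec, cloud, flagged):
    """Condensed sketch of the scale-out branch of Alg. 2.

    vnf: [name, vtype, res, host], vtype in {"HP_LCVNF", "LP_LCVNF", "LTVNF"}.
    vnfs: list of such records for every VNF in the system.
    mec: {"free": <vCPUs>} for the MEC hypervisor of the triggering event.
    cloud: dict cloud hypervisor -> free vCPUs.
    flagged: bookkeeping of LP LCVNFs displaced to the cloud.
    """
    name, vtype, res, host = vnf
    if host == "MEC":
        while True:
            best = max(cloud, key=cloud.get)          # cloud host with most free resources
            if res <= mec["free"]:                    # steps 4-6: it fits at the MEC
                mec["free"] -= res
                return "MEC"
            if vtype == "LP_LCVNF" and res <= cloud[best]:
                cloud[best] -= res                    # steps 7-9: LP replica to the cloud
                flagged.add(name)
                return best
            lp = [v for v in vnfs if v[1] == "LP_LCVNF" and v[3] == "MEC"]
            if lp:
                victim = max(lp, key=lambda v: v[2])  # steps 10-14: migrate the largest LP
                if victim[2] <= cloud[best]:
                    mec["free"] += victim[2]
                    cloud[best] -= victim[2]
                    victim[3] = best
                    flagged.add(victim[0])
                    continue                          # retry the MEC placement
            return None                               # steps 16-18: reject the request
    best = max(cloud, key=cloud.get)                  # steps 21-31: cloud-side event
    for candidate in (host, best):
        if res <= cloud.get(candidate, 0):
            cloud[candidate] -= res
            return candidate
    return None
```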
Overall, the runtime complexity of the proposed algorithm is O(n^2), as it is determined by the most significant term, i.e., the max() operation, which is nested in one loop:
• Scan the VNF array to find LP LCVNFs at the MEC hypervisor → O(n)
• Calculate the maximum value of the array HCloud → O(n), nested in one loop → O(n^2).
In terms of runtime memory, we need:
• four one-dimensional arrays to store the maximum and current capacities of the MEC and cloud hypervisors. Specifically, two arrays of size N for the HMECMax and HMEC values, and two arrays of size M for the HCloudMax and HCloud parameters will be allocated at runtime.
• one two-dimensional dynamic array of size i × 3, to store the Type, Resources and Hypervisor data for each VNF_i.
Regarding the execution of Alg. 2, assuming the presence of i VNFs in the system, the maximum number of iterations can be calculated for the worst-case scenarios. Specifically, for the first loop of the algorithm (steps 3-20), the maximum number of iterations is (i − 1). This occurs when an HP LCVNF needs to be scaled out, while all the remaining VNFs at the MEC are LP LCVNFs and must all be migrated to the cloud to release sufficient resources for the high priority function. Conversely, the maximum number of iterations for the second loop of Alg. 2 (steps 37-40) is also (i − 1), in the case where the aforementioned LP LCVNFs return to their original hypervisor after the scale-in process of the HP LCVNF.
IV. TESTBED IMPLEMENTATION
In order to demonstrate the potential of the described architecture, we introduce a real implementation of a MEC-enabled 5G testbed, depicted in Fig. 5. The hardware of the testbed, as seen in Table II, consists of five physical servers: the functionalities of the core tier (e.g., the cloud and the NFVO) and the edge tier (e.g., the MEC) are deployed to four of them, while another physical server enables the management, in terms of infrastructure virtualization. In terms of compute resources, the physical server at the edge site has significantly lower computational power compared to the servers at the core. In terms of networking, the physical servers are connected to two routers through 1 Gbps Ethernet interfaces.
With respect to the software installation, Openstack [33], in its Queens release, is the open-source Infrastructure-as-a-Service platform that is employed as the VIM, in order to deploy and control the VMs that will host the VNFs.

TABLE II
HARDWARE CHARACTERISTICS

                     Controller Node          Cloud Compute Node x2   MEC Compute Node    OSM
CPU                  Intel i5-8500            Intel i5-8500           Intel i5-7400       Intel i5-8400
Cores                6                        6                       4                   6
RAM                  32GB                     16GB                    8GB                 8GB
Storage              SAMSUNG 960 EVO 250GB,   SAMSUNG 960 EVO 250GB   WD M.2 2280 120GB   WD M.2 2280 120GB
                     SAMSUNG 860 EVO 250GB
Network Interfaces   2x1Gbps Ethernet         2x1Gbps Ethernet        2x1Gbps Ethernet    2x1Gbps Ethernet

The
Openstack Controller Node, deployed to one physical server as
shown in Fig. 5, hosts the compute and network management
components for the virtualization and management of the
infrastructure, while the Compute Nodes (or hypervisors),
deployed to three physical servers, provide a pool of physical
resources, where the VMs are executed. Openstack is based on services and, in order to provide the needed isolation and management, these services are deployed to LXD containers. For
instance, the Nova service, part of the Openstack Compute
Services that reside in all Compute Nodes, is responsible
for spawning, scheduling and decommissioning the VMs on
demand, while the Neutron service, which resides in all four
nodes, is responsible for enabling the networking connectivity.
Additionally, the Openstack Telemetry service (based on the
Ceilometer service) is deployed to collect monitoring data,
including system and network resource utilization, based on
which further actions are taken. All nodes need two network
interfaces, namely, the management, i.e., control plane, for
the communication among the Openstack services and the
NFVO, and the provider network, i.e., data plane, for the
communication among the VMs, while each application has
its own virtual tenant network.
Moreover, it is worth noting that Openstack supports two
important features, namely the horizontal scaling, i.e., the
expansion of the physical resources, simply by adding new
physical servers where the Compute Node services are de-
ployed to, and the live migration of the VMs. The migration
is classified as live due to the fact that after a VM migration is
complete, the VM status resumes exactly from the same state
it was before the migration, without service interruption. The
duration of the live migration might range from a few seconds to several minutes, depending on various factors, including, but not limited to: i) the virtualization platform, ii) the underlying hardware, iii) the type of the hypervisor, iv) the type of storage, v) the footprint of the VMs in terms of vCPUs, RAM and storage, vi) the current network load, and vii) the current VM load. Without any doubt, there should be an upper limit on the duration of the live migration, in order for the system to remain agile and adaptive to real-time traffic changes, but this limit depends on the actual system and the limitations imposed by the hardware, the architecture decisions and the virtualization platform.

TABLE III
EXPERIMENTAL SETUP

Parameter                 Value            Parameter                 Value
HMECMax{1}                3 vCPUs          HCloudMax{1}              6 vCPUs
HP LCVNF max latency      100ms            LP LCVNF max latency      200ms
LTVNF max latency         Irrelevant       Resources per VNF         1 vCPU
Hypervisor HP LCVNF       MEC              Hypervisor LP LCVNF       MEC/Cloud
Hypervisor LTVNF          Cloud            Cooldown Period           180s
Scale-out                 90% CPU util.    Scale-in                  30% CPU util.
The NFVO, which is responsible for the computing and network resource orchestration and management, is deployed as an independent entity to the fifth server, at the core tier, in alignment with the ETSI NFV information models, and is based on the Open Source MANO (OSM) [34], in its sixth release. Although there is a variety of NFVOs in the literature [16], its low hardware requirements, combined with the capabilities it offers, made OSM the most suitable NFVO for our system. OSM supports descriptor files written in YAML, namely the VNFD and the network service descriptor (NSD). The former defines the needed VNF resources in terms of compute resources and logical network connection points, the image that will be launched at the VM, as well as the auto-scale thresholds (e.g., scale-in, scale-out and cooldown period, minimum or maximum number of VNFs), based on the metrics that are collected from the Telemetry service of the VIM. The latter is responsible for the connection point links, using virtual links, among the interconnected VNFs, mapping them to the physical networks provided by the VIM.
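For illustration, the autoscaling-related content of such a descriptor pair can be summarized as the following Python structure (a schematic paraphrase of the fields described above, not OSM's exact YAML schema, whose field names vary between releases):

```python
# Schematic VNFD/NSD content; field names are descriptive, not OSM's schema.
vnfd = {
    "id": "hp-lcvnf",
    "vdu": {
        "image": "service-vm-image",            # image launched at the VM
        "vcpu_count": 1, "memory_gb": 1, "storage_gb": 5,
        "connection_points": ["mgmt", "data"],  # logical network endpoints
    },
    "scaling": {                                # thresholds fed by VIM Telemetry
        "metric": "cpu_utilization",
        "scale_out_threshold": 0.90,
        "scale_in_threshold": 0.30,
        "cooldown_s": 180,
        "min_instances": 1,
        "max_instances": 3,
    },
}
nsd = {
    "id": "face-recognition-ns",
    "virtual_links": [                          # maps connection points onto
        {"from": "hp-lcvnf:data", "to": "ltvnf:data"},  # VIM provider networks
    ],
}
```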
Fig. 6. Scale-out process to accommodate increased incoming traffic

Fig. 7. VNF Initial Placement (left) and placement after auto-scale functionality (right)

Neither Openstack nor OSM is aware of the type of service that is being executed at the VNF. Furthermore, OSM is not aware of the hypervisor where the VM that hosts the VNF is placed, and leaves the VM placement to Openstack. This lack of placement control deprives OSM of control over the migration feature. In order to gain such control, we implemented this functionality with a bash script. Openstack supports four different placement methods via its compute schedulers: filter scheduling, based on filters and weights; chance scheduling, which randomly selects a compute node; utilization aware scheduling, based on actual resource utilization; and availability zones scheduling, where the compute nodes are divided into zones. None of the above options, though, takes into account the actual service running at the VM or how to allocate LCVNFs and LTVNFs among the hypervisors. To that end, we devised the aforementioned bash script, which is based on our two proposed algorithms and performs the onboarding, scale-out/in and live migration actions of the VNFs to the appropriate hypervisors.
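As an indication of how such a script drives the VIM, the fragment below sketches the live migration step in Python (we assume admin credentials in the environment and the `nova live-migration <server> <host>` command of the Queens-era client; the actual implementation is the bash script described above):

```python
import subprocess

def live_migrate(server: str, target_host: str) -> None:
    """Ask the VIM to live migrate a VM, mirroring one action of the script."""
    # Queens-era nova client syntax; assumes OS_* credentials are exported.
    subprocess.run(["nova", "live-migration", server, target_host], check=True)

# e.g., free up MEC resources for an incoming HP LCVNF replica:
# live_migrate("lp-lcvnf-vm", "cloud-compute-1")
```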
V. EXPERIMENTAL RESULTS
In order to demonstrate the potential of the described
architecture, we conducted a set of experiments, leveraging
the MEC-enabled 5G testbed, as described in section IV. In
the following, we first provide an experimental setup and, then,
we evaluate the performance of the proposed algorithms.
A. Experimental setup
In our testbed setup, we assume one MEC and one cloud hypervisor. For our experiments, we define the maximum latency as: i) 100ms for the HP LCVNF, and ii) 200ms for the LP LCVNF. For the LTVNF, the latency is irrelevant, as the transmission is asynchronous. Note that the latency is measured as the E2E delay between the UE and the hypervisor, also corresponding to the response time of the application. The scale-out threshold is set at 90% CPU utilization, the scale-in at 30% and the cooldown period at 180s. Since we assume exponential service time for the LCVNF service, as soon as the CPU utilization exceeds the predefined threshold, the response time violates the service level agreement (SLA), so the scale-out process must take place prior to this violation. Please note that the aforementioned values are fully customizable, depending on the actual requirements of the diverse 5G applications. Table III depicts the experimental setup in detail. Finally, the following three experiments were run multiple times, separately, with a duration of 24 hours each. Since most of the parameters were deterministic, the results were stable, with a variation of 2ms.
B. Autoscaling experiment
In the first experiment, Fig. 6, we demonstrate the scale-out process. We start with one HP LCVNF and, as the traffic increases, the CPU utilization of the VNF increases accordingly. When it reaches the CPU utilization threshold of 90%, it is scaled out and a second HP LCVNF is instantiated. In order to equally distribute the traffic between the two VNFs, we deploy a load balancer VM with a round robin balancing policy. Hence, each VNF has approximately 45% CPU utilization when the new VNF is instantiated.
While the traffic further increases, another scale-out event is triggered and a third HP LCVNF is instantiated, with the load balancer distributing the incoming requests to three VNFs. This results in a 60% CPU utilization by the time the third VNF is instantiated. The measured scale-out duration for a VM with 1 vCPU, 1GB of RAM and 5GB of storage, from the moment of the initial command to the VIM until the instantiation process was complete, was 15 seconds.

Fig. 8. Response time over traffic for the different deployment scenarios
As we increase the traffic over time, we observe that the angle formed between the x axis and the graph in Fig. 6 is reduced, according to the number of VNFs in the system that serve the requests. This is expected, as the traffic is equally distributed to two or three VNFs, while the traffic rate is increasing at a steady pace. With the autoscaling feature, we can accommodate more requests, compared with legacy monolithic deployments that do not support such a feature.
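The per-VNF utilization figures reported above follow directly from the round-robin split of the aggregate load; a short sanity check of this arithmetic (our own, matching the reported 45% and 60% points):

```python
def per_vnf_utilization(total_load: float, replicas: int) -> float:
    """Round-robin load balancing spreads the aggregate load evenly."""
    return total_load / replicas

# First scale-out: the aggregate load equals one VNF's 90% threshold, so two
# replicas each carry ~45%. Second scale-out: the aggregate reaches 2 x 90%,
# so three replicas each carry ~60%.
assert per_vnf_utilization(0.90, 2) == 0.45
assert abs(per_vnf_utilization(1.80, 3) - 0.60) < 1e-9
```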
C. Embedding algorithm & placement experiment
In the second experiment, illustrated in Fig. 7, we demonstrate the various placement locations, validating that the algorithm for the onboarding process, i.e., Alg. 1, provides the optimal placement result for maximizing the served requests. More specifically, we assume two chained VNFs, one HP LCVNF and one LTVNF, and we investigate three possible VNF placement methods: i) all VNFs deployed to the cloud (Fig. 7-a), ii) the HP LCVNFs deployed to the MEC and the LTVNFs to the cloud (Fig. 7-b), and iii) all VNFs deployed to the MEC (Fig. 7-c). We reject the first solution, as the SLA is violated because the HP LCVNF cannot tolerate the increased latency imposed by the MEC-cloud link. According to the embedding algorithm, the initial placement is performed based on latency constraints, i.e., the HP LCVNFs are allocated to the edge tier, while the LTVNFs are allocated to the core tier. After the initial placement, the HP LCVNF is hosted in the MEC (VNF_1{HP, 1, MEC}), while the LTVNF is hosted in the cloud (VNF_2{LTVNF, 1, Cloud}). This is the optimal placement solution as, in case of increased traffic, the HP LCVNF can scale-out twice, until the MEC resources are depleted (HMEC = 0), and serve more requests (Fig. 7-e). Finally, in the third deployment method, where everything is deployed to the MEC, the HP LCVNF can scale-out only once (Fig. 7-f) before the MEC resources are depleted (HMEC = 0), since there is one LTVNF deployed to the MEC (VNF_2{LTVNF, 1, MEC}) that occupies 1 vCPU.
Fig. 9. Live migration to accommodate more HP LCVNF at the edge
In Fig. 8, the response time of the VNFs versus the traffic is depicted, depending on the hypervisor where the VNFs are placed. From this figure, we can observe that if all the VNFs are deployed to the cloud, no further investigation is performed, as this deployment method violates the SLA (over 100ms) for the HP LCVNF. For the MEC-cloud placement method, i.e., when the VNFs are placed between the MEC and the cloud, the system is able to support up to three HP LCVNFs at the MEC and to serve up to 270 requests/second without violating the SLA. Finally, while the third deployment method has improved response time, due to the elimination of the link for the communication of the HP LCVNF with the LTVNF (they are hosted in the same hypervisor), the total requests/second that it can serve is limited to 180, due to the fact that the MEC resource quota has been reached.
D. Online VNF scheduling experiment
In the third experiment, depicted in Fig. 9, we demonstrate how the live migration feature can be employed to support more requests when LCVNFs with different priorities are competing for the same MEC resources, without interrupting the availability of the near real-time application. In this scenario, we take advantage of the live migration feature, described in Alg. 2. Initially, the embedding algorithm allocates both VNFs to the edge tier (Fig. 9-a) (VNF_1{HP, 1, MEC}, VNF_2{LP, 1, MEC}). While the requests for VNF_1 are increasing, the CPU utilization increases as well, resulting in the scale-out of VNF_1 (VNF_3{HP, 1, MEC}). When a second scale-out (VNF_4{HP, 1, MEC}) takes place, the MEC resources have been depleted (HMEC = 0), triggering the scheduling algorithm to: i) live migrate the LP LCVNF to the cloud (VNF_2{LP, 1, Cloud}), as depicted in Fig. 9-b, and ii) place the scaled-out HP LCVNF (VNF_4) at the MEC (Fig. 9-c). When the traffic at the HP LCVNF decreases, a scale-in (termination of VNF_4) occurs and the LP LCVNF (VNF_2) is migrated back to its original hypervisor (Fig. 9-d).

Fig. 10. Response time pre and post migration
In Fig. 10, we evaluate the response time versus the time in minutes. The requests for the HP LCVNF increase over time, while the requests for the LP LCVNF are stable. As the HP LCVNF needs to scale-out at minute 85, the script commands the VIM to live migrate the LP LCVNF from the MEC to the cloud, thus freeing up resources for the scale-out of the HP LCVNF. The live migration process, at minute 85, lasts 28 seconds, for a VM with 1 vCPU, 512MB RAM and 3GB local storage, while no service interruption was observed. Please note that during the live migration process, we notice a slightly increased response time for the LP LCVNF, which does not violate the SLA either during or after the migration. Finally, when the scale-in action occurs at minute 145, the LP LCVNF is migrated back to its original hypervisor, in accordance with Alg. 2.
VI. CONCLUSION
In this paper we presented a MEC-enabled 5G IoT architecture, able to exploit the interplay between the core and edge tiers in an NFV environment. We discussed the key enabling technologies and filled the gap between the NFVO and the VIM entities by proposing embedding and scheduling algorithms for the initial placement and online reallocation of the VNFs, respectively, leading to enhanced VNF lifecycle management. We applied our algorithms to a fully deployed MEC-enabled 5G testbed implementation, where applications with different priorities and latency constraints have been executed. The conducted experiments showed that, through the proposed schemes, a better utilization of MEC and cloud resources can be obtained on the fly, enabling the system to serve a higher number of latency-critical applications without SLA violation.
As future work, we aim to extend our study by including the RAN part, both in our architecture and in the real deployment environment. Furthermore, we will investigate more VNF allocation and placement policies and we will measure the overhead introduced by the live migration process.
REFERENCES
[1] “Ericsson Mobility Report,” June 2019. [Online]. Available: https:
//www.ericsson.com/en/mobility-report/reports/june-2019
[2] “Cisco Visual Networking Index: Forecast and Trends, 2017–2022,” White Paper. [Online]. Available: https://www.cisco.com/c/en/us/solutions/collateral/service-provider/visual-networking-index-vni/white-paper-c11-741490.pdf
[3] ETSI NFV, “Network Functions Virtualisation (NFV); Management and Orchestration,” December 2014. [Online]. Available: https://www.etsi.org/deliver/etsi_gs/NFV-MAN/001_099/001/01.01.01_60/gs_NFV-MAN001v010101p.pdf
[4] A. J. Gonzalez, G. Nencioni, A. Kamisiński, B. E. Helvik, and P. E. Heegaard, “Dependability of the NFV Orchestrator: State of the art and research challenges,” IEEE Communications Surveys & Tutorials, vol. 20, no. 4, pp. 3307–3329, Fourthquarter 2018.
[5] S. Sharma, R. Miller, and A. Francini, “A cloud-native approach to 5g
network slicing,” IEEE Communications Magazine, vol. 55, no. 8, pp.
120–127, Aug 2017.
[6] ETSI, “MEC in 5G networks,” June 2018, White Paper No. 28. [Online]. Available: https://www.etsi.org/images/files/ETSIWhitePapers/etsi_wp28_mec_in_5G_FINAL.pdf
[7] P. Porambage, J. Okwuibe, M. Liyanage, M. Ylianttila, and T. Taleb, “Survey on multi-access edge computing for internet of things realization,” IEEE Communications Surveys & Tutorials, vol. 20, no. 4, pp. 2961–2991, Fourthquarter 2018.
[8] P. Mach and Z. Becvar, “Mobile edge computing: A survey on architecture and computation offloading,” IEEE Communications Surveys & Tutorials, vol. 19, no. 3, pp. 1628–1656, Thirdquarter 2017.
[9] C. X. Mavromoustakis, J. M. Batalla, G. Mastorakis, E. Markakis, and
E. Pallis, “Socially oriented edge computing for energy awareness in
iot architectures,” IEEE Communications Magazine, vol. 56, no. 7, pp.
139–145, July 2018.
[10] Y. Nikoloudakis, E. Markakis, G. Alexiou, S. Bourazani, G. Mastorakis,
E. Pallis, I. Politis, C. Skianis, and C. Mavromoustakis, “Edge caching
architecture for media delivery over p2p networks,” in 2018 IEEE 23rd
International Workshop on Computer Aided Modeling and Design of
Communication Links and Networks (CAMAD), Sep. 2018, pp. 1–5.
[11] C. X. Mavromoustakis, G. Mastorakis, and J. Mongay Batalla, “A mobile edge computing model enabling efficient computation offload-aware energy conservation,” IEEE Access, vol. 7, pp. 102295–102303, 2019.
[12] H. Liao, Z. Zhou, S. Mumtaz, and J. Rodriguez, “Robust task offloading
for IoT Fog computing under information asymmetry and information
uncertainty,” in ICC 2019 - 2019 IEEE International Conference on
Communications (ICC), May 2019, pp. 1–6.
[13] M. Mukherjee, S. Kumar, M. Shojafar, Q. Zhang, and C. X. Mavromous-
takis, “Joint task offloading and resource allocation for delay-sensitive
fog networks,” in ICC 2019 - 2019 IEEE International Conference on
Communications (ICC), May 2019, pp. 1–7.
[14] X. He, R. Jin, and H. Dai, “Deep PDS-Learning for Privacy-Aware
Offloading in MEC-Enabled IoT,” IEEE Internet of Things Journal,
vol. 6, no. 3, pp. 4547–4555, June 2019.
[15] Z. Ning, P. Dong, X. Kong, and F. Xia, “A Cooperative Partial
Computation Offloading Scheme for Mobile Edge Computing Enabled
Internet of Things,” IEEE Internet of Things Journal, vol. 6, no. 3, pp.
4804–4814, June 2019.
[16] T. Taleb, K. Samdanis, B. Mada, H. Flinck, S. Dutta, and D. Sabella, “On multi-access edge computing: A survey of the emerging 5G network edge cloud architecture and orchestration,” IEEE Communications Surveys & Tutorials, vol. 19, no. 3, pp. 1657–1681, Thirdquarter 2017.
[17] S. Shahzadi, M. Iqbal, T. Dagiuklas, and Z. U. Qayyum, “Multi-
access edge computing: open issues, challenges and future perspectives,”
Journal of Cloud Computing, vol. 6, no. 1, p. 30, Dec 2017.
[18] ETSI Group Specification, “Multi-access Edge Computing (MEC); Framework and Reference Architecture,” January 2019, GS MEC 003 V2.1.1. [Online]. Available: https://www.etsi.org/deliver/etsi_gs/MEC/001_099/003/02.01.01_60/gs_MEC003v020101p.pdf
[19] L. Wang, Z. Lu, X. Wen, R. Knopp, and R. Gupta, “Joint optimization
of service function chaining and resource allocation in network function
virtualization,” IEEE Access, vol. 4, pp. 8084–8094, 2016.
[20] S. Herker, X. An, W. Kiess, S. Beker, and A. Kirstaedter, “Data-
center architecture impacts on virtualized network functions service
chain embedding with high availability requirements,” in 2015 IEEE
Globecom Workshops (GC Wkshps), Dec 2015, pp. 1–7.
[21] A. Laghrissi and T. Taleb, “A survey on the placement of virtual resources and virtual network functions,” IEEE Communications Surveys & Tutorials, vol. 21, no. 2, pp. 1409–1434, Secondquarter 2019.
[22] A. Laghrissi, T. Taleb, M. Bagaa, and H. Flinck, “Towards edge slicing: VNF placement algorithms for a dynamic & realistic edge cloud environment,” in GLOBECOM 2017 - 2017 IEEE Global Communications Conference, Dec 2017, pp. 1–6.
[23] D. B. Oljira, K. Grinnemo, J. Taheri, and A. Brunstrom, “A model for
qos-aware vnf placement and provisioning,” in 2017 IEEE Conference
on Network Function Virtualization and Software Defined Networks
(NFV-SDN), Nov 2017, pp. 1–7.
[24] V. Eramo, E. Miucci, M. Ammar, and F. G. Lavacca, “An approach
for service function chain routing and virtual function network instance
migration in network function virtualization architectures,” IEEE/ACM
Transactions on Networking, vol. 25, no. 4, pp. 2008–2025, Aug 2017.
[25] Y. Jia, C. Wu, Z. Li, F. Le, and A. Liu, “Online scaling of nfv service
chains across geo-distributed datacenters,” IEEE/ACM Transactions on
Networking, vol. 26, no. 2, pp. 699–710, April 2018.
[26] H. Tang, D. Zhou, and D. Chen, “Dynamic network function instance
scaling based on traffic forecasting and vnf placement in operator
data centers,” IEEE Transactions on Parallel and Distributed Systems,
vol. 30, no. 3, pp. 530–543, March 2019.
[27] L. T. Bolivar, C. Tselios, D. Mellado Area, and G. Tsolis, “On the
deployment of an open-source, 5g-aware evaluation testbed,” in 2018 6th
IEEE International Conference on Mobile Cloud Computing, Services,
and Engineering (MobileCloud), March 2018, pp. 51–58.
[28] J. Haavisto, M. Arif, L. Lovén, T. Leppänen, and J. Riekki, “Open-source RANs in practice: an over-the-air deployment for 5G MEC,” in 2019 European Conference on Networks and Communications (EuCNC), June 2019, pp. 495–500.
[29] H.-C. Hsieh, C.-S. Lee, and J.-L. Chen, “Mobile edge computing plat-
form with container-based virtualization technology for iot applications,”
Wireless Personal Communications, vol. 102, no. 1, pp. 527–542, Sep
2018.
[30] I. Sarrigiannis, E. Kartsakli, K. Ramantas, A. Antonopoulos, and
C. Verikoukis, “Application and network vnf migration in a mec-
enabled 5g architecture,” in 2018 IEEE 23rd International Workshop
on Computer Aided Modeling and Design of Communication Links and
Networks (CAMAD), Sep. 2018, pp. 1–6.
[31] M. Z. Khan, S. Harous, S. U. Hassan, M. U. Ghani Khan, R. Iqbal, and S. Mumtaz, “Deep Unified Model For Face Recognition Based on Convolution Neural Network and Edge Computing,” IEEE Access, vol. 7, pp. 72622–72633, 2019.
[32] A. C. Baktir, A. Ozgovde, and C. Ersoy, “How Can Edge Computing Benefit From Software-Defined Networking: A Survey, Use Cases, and Future Directions,” IEEE Communications Surveys & Tutorials, vol. 19, no. 4, pp. 2359–2391, Fourthquarter 2017.
[33] “The Openstack Foundation.” [Online]. Available: www.openstack.org
[34] “Open Source MANO.” [Online]. Available: osm.etsi.org
Ioannis Sarrigiannis (S’10) received his five-year
diploma (MSc equivalent) in Information and Com-
munication Systems Engineering in 2015 from the
University of the Aegean, Samos, Greece. Currently,
he is a Marie Curie Researcher at Iquadrat Infor-
matica S.L., in Barcelona, Spain, and is pursuing
his Ph.D. degree in Signal Theory and Communica-
tions (TSC) with Polytechnic University of Catalo-
nia (UPC), Barcelona, Spain. His current research
interests include Software Defined Networks, Net-
work Function Virtualization, VNF placement and
orchestration and Network Slicing, in the scope of cloud-edge architectures
towards the realization of 5G concepts.
Kostas Ramantas has received the Diploma of
Computer Engineering, the MSc degree in Computer
Science and Engineering and the PhD degree from
the University of Patras, Greece, in 2006, 2008
and 2012 respectively. Up to now, he has been
the recipient of two national scholarships and has
participated in the EC funded ICT-BONE and ePho-
ton/One+ Networks of Excellence, conducting joint
research with many European research groups. His
research interests are in modelling and simulation
of network protocols, and scheduling algorithms for
QoS provisioning. Dr Ramantas is a member of the Technical Chamber of
Greece. In June 2013, he joined IQUADRAT as a senior researcher and is
actively involved in EU-funded research projects.
Elli Kartsakli (S’07-M’09-SM’15) received her
Ph.D. in Wireless Telecommunications from the
Technical University of Catalonia (UPC) in February
2012. Her primary research interests include proto-
cols and architectures for 5G networks and beyond,
with focus on resource orchestration and slicing
mechanisms, energy-efficiency networking for cel-
lular and sensor systems, and cross-layer medium
access control (MAC) optimization for multiuser and
cooperative scenarios.
Prodromos-Vasileios Mekikis received his PhD degree from the Department of Signal Theory and Communications of the Technical University of Catalonia (UPC), Spain, in 2017. His main research
interests include Network Function Virtualization,
Wireless Energy Harvesting and connectivity in mas-
sive IoT networks.
Angelos Antonopoulos (S’10–M’12–SM’15) re-
ceived the Ph.D. degree from the Technical Univer-
sity of Catalonia (UPC) in 2012. He is currently
a Senior Researcher with CTTC/CERCA. He has
authored over 100 peer-reviewed publications on
various topics, including energy efficient network
planning and operation, radio resource management,
data caching and dissemination, cooperative com-
munications and network economics. He currently
serves as an Associate Editor in IEEE Networking
Letters, IEEE Access and Computer Networks (El-
sevier). He was nominated as Exemplary Reviewer for the IEEE Communi-
cations Letters (2015), Reviewer of the Month for the IEEE Access (June
2018) and Outstanding Reviewer for Sensors (2018), while he has received
the best paper award in IEEE GLOBECOM 2014, the best demo award in
IEEE CAMAD 2014, the 1st prize in the IEEE ComSoc Student Competition
(as a Mentor) and the EURACON best student paper award in EuCNC 2016.
Christos Verikoukis (S’95-AM’04-M’04-SM’07) received the Ph.D. degree from the Polytechnic University of Catalonia (UPC) in 2000. He is currently a Fellow Researcher with the Centre Tecnològic de Telecomunicacions de Catalunya / Institució dels Centres de Recerca de Catalunya (CTTC/CERCA) and an Adjunct Associate Professor with the University of Barcelona. He has coauthored 128 journal and more
Barcelona. He has coauthored 128 journal and more
than 200 conference papers, as well as four books,
20 chapters, and three patents. He has participated in
over 40 competitive projects and was the Principal
Coordinator in three EC and four national funded projects. He has supervised
18 Ph.D. students and seven postdoctoral researchers.
Dr. Verikoukis was the recipient of the Best Paper Award at IEEE In-
ternational Conference on Communications 2011, IEEE GLOBECOM 2014
and 2015, and EuCNC/EURACON 2016, and the EURASIP 2013 Best Paper
Award for the Journal on Advances in Signal Processing. He is currently
Associate Editor-in-Chief for the IEEE NETWORKING LETTERS and
Member-at-Large of Globecom/ICC Technical Content (GITC).