LL-MEC: Enabling Low Latency Edge Applications
Navid Nikaein, Xenofon Vasilakos and Anta Huang
Communication Systems Department, EURECOM, Biot, France 06410
firstname.lastname@eurecom.fr
Abstract—We present LL-MEC, the first open source Low-
Latency Multi-access Edge Computing (MEC) platform enabling
mobile network monitoring, control, and programmability while
retaining compatibility with 3GPP and ETSI specifications. LL-
MEC achieves coordinated resource programmability in end-to-
end slicing scenarios by leveraging SDN towards an appropriate
allocation of resources, thus drastically improving the perfor-
mance of slices. We evaluate LL-MEC in three practical case
studies, namely, (i) end-to-end mobile network slicing, (ii) RAN-
aware video content optimization and (iii) IoT gateway, and show
that it achieves a 2-4x lower user plane latency compared to LTE,
while enabling low latency edge applications to operate on a per
millisecond basis. Also, we highlight the benefits of RAN-aware
applications in improving user Quality of Experience (QoE),
showing a significant user latency reduction along with a much
lower variability compared to legacy LTE. Last, a compatibility
evaluation of LL-MEC over the OpenAirInterface real-time LTE
platform demonstrates the scalability merits of LL-MEC due
to the use of an OpenFlow-enabled virtual switch for the user plane function, rather than the Linux kernel GTP path of typical LTE setups.
Index Terms—MEC, SDN, Programmability, LTE
I. INTRODUCTION
One of the key challenges towards a 5G era in mo-
bile networking is the ever-increasing demand for resource-
hungry, content-rich services such as HD video streaming
and augmented reality, which require both low latency and
high reliability. Another challenge stems from the Internet of
Things (IoT) use cases, which demand throughput provision-
ing, reliability, and low-latency connectivity among a large
number of devices. Rather than redesigning the architecture,
several popular solutions try to address these challenges via network slicing [1]–[3] and IoT gateways [4]. In this context, the ETSI-specified Multi-access Edge Computing (MEC) provides a low-latency cloud environment for applications at the network’s edge to monitor and control the underlying networks in close proximity to the users, hence offering a remedy
for the aforementioned challenges. As an example, video
streaming requests by User Equipment (UE) can be served via
MEC-hosted applications, as MEC can program data paths and
redirect traffic to local or remote serving nodes that improve
the perceived user experience in a totally transparent manner.
MEC is characterized by its proximity to the Radio Access Network (RAN) as well as by providing applications with real-time access to radio network information. This feature, in particular, highlights low latency as a key point in MEC, while multi-RAT connectivity provides interoperability and coordination to cater to the needs of different access technologies through appropriate network abstraction, enabling a unified User Plane (UP) convergence that is reflected in the term “Multi-access”. Last, besides its technical benefits, MEC
opens the network to authorized third-parties who can rapidly
deploy innovative applications, thus creating a new market and
an unprecedented value chain in mobile networking.
Considering that programmability is a key MEC require-
ment, Software-Defined Networking (SDN) presents a promising approach that is already extensively used in non-mobile networks. Along these lines, we can leverage the well-defined OpenFlow [5] SDN protocol for moving the Control Plane (CP) away from physical devices and for abstracting the underlying infrastructure, creating an unparalleled series of innovation and customization opportunities for network applications. Also, the success of SDN in non-mobile net-
working gives the right motivation to apply SDN in the Core
Network (CN) of LTE [6]. By separating the CP from the
UP, SDN virtualizes the mobile network components such as
the Mobility Management Entity (MME), the Control planes
of the Serving-Gateway (S-GW-C) and the Packet-Gateway (P-
GW-C) as potential MEC applications. In essence, MEC can
leverage CN programmability and further extend it in the RAN
segment to further delegate control decisions.
A. Contribution
A considerable body of research on SDN and MEC has focused on conceptual frameworks, yet in the absence of an open source platform for evaluating the benefits of SDN-enabled
MEC services. This motivates us to explore and demonstrate
coordinated network programmability through our MEC plat-
form and an ecosystem of network applications running on top
of it. The main points of our contribution can be listed as:
1) LL-MEC: We contribute the first open source 3GPP-
compliant implementation of a Low Latency Multi-access
Edge Computing (LL-MEC) platform that covers multiple APIs
aligned to ETSI MEC specifications. LL-MEC is an extended and concrete implementation of [7], using the extended OpenFlow [5] and FlexRAN [8] protocols, and addressing three
types of latency: (i) User latency, defined as an end-to-end
user transport latency; (ii) Control latency, which captures the latency for the MEC platform and the underlying network to perform an action on behalf of an edge application, e.g., for control and/or
monitoring; and (iii) Application latency, which represents the
latency for performing MEC actions by edge applications.
2) Network slicing: We enable network slicing in both the
edge and core network segments by sharing physical resources
across multiple logically isolated networks. Building upon
MEC and SDN, LL-MEC achieves coordinated resource
(radio resources, switching bandwidth) and UP programma-
bility for end-to-end slicing, hence drastically improving per-
formance through appropriate resource allocation.
3) Platform evaluation & validation: We evaluate the per-
formance of LL-MEC in three different practical case studies:
(i) end-to-end mobile network slicing; (ii) RAN-aware video
content optimization; and (iii) IoT gateway featuring dedicated
core network (DCN), paving the way for the 5G community
to engage in new investigation directions via our platform in
further case studies. In a nutshell, our thorough results validate
that LL-MEC yields a significant user latency reduction along
with a much lower variability compared to legacy LTE. Also,
a compatibility evaluation over the OpenAirInterface (OAI)
real-time LTE platform [9] shows that LL-MEC exhibits significant scalability merits with a massive number of UEs, due to the Open vSwitch (OVS) used for the UP functionality, rather than the Linux kernel GTP path of legacy LTE setups.
B. Paper structure
We discuss the implementation of the LL-MEC platform
for SDN-based mobile networks in Sec. II, providing coordi-
nated CP and UP programmability. In Sec. III, we provide a compatibility assessment of LL-MEC over OAI [9], using Commercial Off-The-Shelf (COTS) UEs. Also, we present a system performance assessment in terms of CPU load and traffic latency, showing significant user latency gains and good scalability features. Next, Sec. IV presents a thorough evaluation of the LL-MEC platform in three practical case studies. Finally, Sec. V outlines the related work before we conclude and discuss our future work goals in Sec. VI.
II. LL-MEC DESIGN & IMPLEMENTATION
This section provides an overview of the architecture and
identifies the design challenges for realizing a low latency
MEC platform. Fig. 1 portrays that the MEC application
manager stands as the foundation for the upper-most layers,
providing the (Mp1) programming interface for applications.
The middle layer includes two core components, namely, the Radio Network Information Service (RNIS) and the Edge Packet Service (EPS), which manage the RAN and CN network services based on the CP and UP APIs of the abstraction layer, respectively. Standing at the bottom-most
layer, eNBs and OpenFlow-enabled switches comprise the
UP functions. FlexRAN and OpenFlow comprise the CP
functions and abstract all information and expose it via the
abstraction API (Mp2). The platform operates on a software-
defined mobile network consisting of multiple LTE eNBs and
physical or software OpenFlow-enabled switches. Following
the separation of CP from UP, we adopt “X-GW-C” and “X-GW-U” as the corresponding notations for the 4G serving (S-GW) and packet (P-GW) gateways as well as for the 5G session management function (SMF) and user plane function (UPF). Note that, according to ETSI specifications, the Mp1 and Mp2 reference points comprise the interfaces between the different layers.
A key point of LL-MEC is its software development kit (SDK), which provides a unified application development environment that allows applying coordinated control decisions across the RAN and CN segments.
Fig. 1: High-level schematic of LL-MEC.
The FlexRAN
and OpenFlow abstraction protocols used for the RAN and
the CN, respectively, facilitate communication among the
different network elements. Along with their corresponding
APIs, FlexRAN and OpenFlow are integrated in LL-MEC to
enable a two-way interaction between them, hence allowing the platform to fulfill requests from a virtually limitless set of edge applications and to execute precise tasks on the underlying networks. Besides this, LL-MEC is designed to support time-critical RAN operations and the deployment of different priority-level
applications when interacting with the platform.
A. Design Challenges
A first key design challenge for realizing LL-MEC refers to
the separation of the CP from the UP throughout the RAN and
the CN, whereas a second challenge regards the coordinated
CP and UP programmability across the RAN and the CN
with real-time access to RAN information. Another challenge
regards scalability with the large number of users and services
and, finally, a fourth challenge refers to the flexibility of
registering low latency applications and services in order to
support time-critical control decisions, priorities and deadlines.
B. Mobile Network Abstraction
The abstraction layer models the required operations for
the underlying network through a unified interface. In LL-
MEC, the CP API and UP API comprise the abstraction layer
for RAN and CN, respectively, by providing the necessary
information for MEC platform and application development.
LL-MEC control protocols are divided into: (i) the RAN side, enabled by FlexRAN [8], and (ii) the CN side, handled through OpenFlow [5]. FlexRAN
provides an abstract view of the radio network status (e.g.,
signal strength) by extracting parameters with a required
granularity level. Also, it enables modifying and controlling the state of the underlying network, passing control decisions per subframe, e.g., for reconfiguring resource blocks for UEs.
Fig. 2: Sequence diagram of S1/S5/S8 bearer setup.
OpenFlow provides a fine-grained programmable UP through the abstraction of the underlying data paths, allowing the switch
to handle GPRS Tunneling Protocol (GTP) packets in the CN.
C. Edge Packet Service
The Edge Packet Service (EPS) is a main component, providing a native IP service end-point for MEC applications to meet specific purposes. For example, incoming UE traffic is routed according to the rules that EPS sets up in the switches, and these rules can be altered dynamically to optimize routing. Being a core LL-MEC entity, EPS offers northbound and southbound interfaces, namely, Mp1 and Mp2 in Fig. 1.
Mp1 is the interface for MEC applications to instruct
basic and advanced functionalities in the underlying network
such as default/dedicated bearer (re-)establishment, QoS for Guaranteed Bit Rate (GBR) traffic, and other specific requests. Considering LTE legacy
compatibility, Fig. 2 shows the relevant messaging for the
S1/S5/S8 bearer establishment procedure. Upon issuing a
Modify Bearer Request by either the LTE attach procedures or
EPS Mobility Management Service Request, X-GW-C notifies
LL-MEC for bearer establishment via a UE Setup Rules
Request, allowing it to trigger OpenFlow rules for setting
up the switch. When X-GW-C receives a UE Setup Rules
Response, the bearer establishment is confirmed. The message
for calling the UE Setup Rules Request API must include
user identities like uplink/downlink tunnel ID and bearer ID.
The procedure for supporting QoS for GBR traffic through OpenFlow meter and group tables is similar.
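To make the Mp1 interaction concrete, the minimal sketch below issues a UE Setup Rules Request toward the EPS over a REST call; the endpoint path, port and JSON field names are illustrative assumptions and do not reproduce the actual LL-MEC API.

```python
import requests  # third-party HTTP client

# Hypothetical Mp1 endpoint exposed by the LL-MEC Edge Packet Service (EPS);
# the path, port and JSON field names are illustrative assumptions and not
# the documented LL-MEC API.
LL_MEC_MP1 = "http://127.0.0.1:9999/bearer"

def setup_ue_rules(imsi, bearer_id, ul_teid, dl_teid, enb_ip, ue_ip):
    """Ask the EPS to push OpenFlow rules for a UE's default bearer."""
    payload = {
        "imsi": imsi,              # user identity
        "eps_bearer_id": bearer_id,
        "s1_ul_teid": ul_teid,     # uplink GTP tunnel ID
        "s1_dl_teid": dl_teid,     # downlink GTP tunnel ID
        "enb_ip": enb_ip,
        "ue_ip": ue_ip,
    }
    resp = requests.post(LL_MEC_MP1, json=payload, timeout=1.0)
    resp.raise_for_status()
    return resp.json()             # acts as the UE Setup Rules Response

if __name__ == "__main__":
    print(setup_ue_rules("208950000000001", 5, 0x1, 0x2,
                         "192.168.12.2", "172.16.0.2"))
```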
Mp2 is the interface for MEC applications to instruct the UP
on routing traffic via OpenFlow rules. The types of rules that EPS maintains in the OpenFlow handler (Fig. 1) are: (i) default rules, pushed to OpenFlow-enabled switches on connection establishment for handling Address Resolution Protocol (ARP) and Domain Name System (DNS) queries; (ii) UE-specific rules, for establishing the default and dedicated bearers of a UE; and (iii) MEC application rules, pushed to OpenFlow-enabled switches upon events registered by applications. With a well-defined and complete set of rules, the UP gets fully separated from the CP, thus improving user latency (Sec. II-A and III-B).
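The sketch below illustrates the kind of UE-specific rule the EPS could push over Mp2, assuming the GTP-patched Open vSwitch maps the GTP TEID to the standard tun_id match field and that the X-GW-U datapath is bridge br0; both are our own assumptions, made only for illustration.

```python
import subprocess

# Sketch of a UE-specific downlink rule pushed over Mp2. It assumes the
# GTP-patched Open vSwitch maps the GTP TEID to the standard tun_id field
# and that the X-GW-U datapath is bridge "br0"; both are assumptions made
# for illustration only.
def push_ue_downlink_rule(bridge, dl_teid, ue_ip, out_port, priority=10):
    """Steer a UE's downlink traffic into its GTP tunnel with an OpenFlow rule."""
    flow = (
        f"priority={priority},ip,nw_dst={ue_ip},"
        f"actions=set_field:{dl_teid}->tun_id,output:{out_port}"
    )
    subprocess.run(
        ["ovs-ofctl", "-O", "OpenFlow13", "add-flow", bridge, flow],
        check=True,
    )

push_ue_downlink_rule("br0", 0x2, "172.16.0.2", out_port=1)
```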
D. Radio Network Information Service
Specified by ETSI MEC1, the RNIS in LL-MEC exposes real-time RAN information to MEC applications, such as radio bearer statistics, UE-related measurements and state changes, or power measurements, by interacting with the CP API.
1 ETSI GS MEC 002, MEC; Technical Requirements.
It is
possible to adjust the granularity of information per cell,
UE or radio access bearer, and to request information once,
periodically, or when an event is triggered. The CP API defines
a set of functions used by the UP to notify the CP about
events such as the initiation of a new Transmission Time
Interval (TTI). In order to have a clean separation of RAN
CP and UP, the FlexRAN protocol and the RAN Information
Base (RIB) [8] are integrated into LL-MEC. FlexRAN acts as an abstraction layer allowing the management of higher-level control operations in a technology-agnostic way, similar to how OpenFlow abstracts the datapath in wired networks.
The RIB maintains all statistics and configuration details about
the RAN that are accessed by applications. Furthermore, RNIS
can have direct and high-priority access to the RIB on a per-millisecond basis, which eases control latency thanks to the integration of the RIB in LL-MEC. Thus, an edge application can, e.g., query each user's link quality to provide a quasi real-time throughput indication for the next time window.
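As an illustration of such a query, the sketch below polls per-UE link quality from an RNIS-style endpoint on a short timescale; the URL and JSON layout are hypothetical and may differ from the actual Mp1/RNIS resources.

```python
import time
import requests

# Hypothetical RNIS resource exposing per-UE radio statistics; the URL and
# the JSON layout are assumptions for illustration.
RNIS_URL = "http://127.0.0.1:9999/rnis/ue_stats"

def poll_cqi(period_s=0.001):
    """Poll the wideband CQI of every attached UE once per period."""
    while True:
        stats = requests.get(RNIS_URL, timeout=0.5).json()
        for ue in stats.get("ues", []):
            yield ue["rnti"], ue["wideband_cqi"]
        time.sleep(period_s)

# Take a single sample for the example.
for rnti, cqi in poll_cqi():
    print(f"UE {rnti}: CQI {cqi}")
    break
```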
E. MEC Application Framework
MEC applications can be developed for any purpose and
without detailed knowledge of the underlying network thanks to the separation of CP from UP. Mp1 and the SDK built on top of it
facilitate a programming environment, with the SDK offering
a uniform interface and means for platform communication
while abstracting the multiple choices of Mp1 including a
REST API, a message bus and a local API for different
application requirements. Examples include monitoring and
acquiring information through the message bus, managing
traffic rules via the REST API within 100 ms, or optimizing
content based on radio quality within a single ms. Applications
can also access basic LL-MEC functionalities through Mp1
such as a service registry and an event mechanism [7].
Another pivotal LL-MEC feature regards its different application scheduling recipes, like round-robin or deadline-based scheduling, for accommodating different task priorities. This significantly lowers application latency and helps meet control deadlines.
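To make the scheduling-recipe idea concrete, the following toy sketch orders pending application tasks by deadline using a priority queue; it is an earliest-deadline-first illustration under our own assumptions, not the LL-MEC scheduler implementation.

```python
import heapq
import time

# Toy earliest-deadline-first (EDF) ordering of MEC application tasks.
class EdfScheduler:
    def __init__(self):
        self._queue = []   # (absolute deadline, sequence number, callable)
        self._seq = 0

    def submit(self, task, deadline_ms):
        """Queue a task that should complete within deadline_ms."""
        deadline = time.monotonic() + deadline_ms / 1000.0
        heapq.heappush(self._queue, (deadline, self._seq, task))
        self._seq += 1

    def pending(self):
        return bool(self._queue)

    def run_once(self):
        """Run the most urgent task and report whether its deadline was met."""
        deadline, _, task = heapq.heappop(self._queue)
        task()
        return time.monotonic() <= deadline

sched = EdfScheduler()
sched.submit(lambda: print("reconfigure RAN slice"), deadline_ms=1)    # time critical
sched.submit(lambda: print("update traffic rules"), deadline_ms=100)   # elastic
while sched.pending():
    sched.run_once()
```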
III. SYSTEM EVALUATION
We deploy the LL-MEC platform with one or multiple OpenFlow-enabled switches, using Open vSwitch v2.5.1 under OpenFlow protocol v1.3 for handling UP traffic.2 In order to have GTP tunneling functionality, we applied a GTP patch to the Open vSwitch 2.5.1 implementation. It is, however,
important to note that a physical switch with OpenFlow and
GTP support can also work with LL-MEC. Fig. 3 illustrates
two different setups: SDN-based LTE with LL-MEC and
legacy LTE. All components, such as the eNB, CN, Open vSwitch and LL-MEC, run on a commodity Linux-based PC equipped with a dual-core i5-661 CPU at 3.3 GHz and 4 GB of RAM.
Depending on the experiment, LL-MEC is connected either to a massive S1-U packet generator via Gigabit Ethernet or to an OpenAirInterface (OAI) LTE eNB with a radio front-end (Ettus B210 USRP) and COTS UEs (a Nexus 6P and a HUAWEI E392 4G LTE dongle).
2 http://mosaic-5g.io/ll-mec/
Fig. 3: The evaluation setup: (a) LL-MEC in SDN-based LTE; (b) legacy LTE.
The massive S1-U packet generator is
based on the Python Scapy library to craft and send customized
GTP packets with different packet sizes and inter-departure
times down to one ms. The generator can send GTP-encapsulated traffic emulating up to 10,000 UEs simultaneously. This way, we assess LL-MEC user latency
compared to legacy LTE under high-load conditions. We
conduct two types of experiments: (i) Compatibility with
measurements taken in a real LTE network with COTS UEs to
evaluate the throughput performance; and (ii) Scalability with
measurements taken in a real LTE CN with generated UE-to-eNB traffic to evaluate the benefits of CN offloading when
redirecting both the CP and UP traffic.
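The behavior of the massive S1-U packet generator can be approximated with Scapy's GTP contribution module, as in the sketch below; the S1-U addresses and TEIDs are placeholders, and the actual generator used in our experiments is a separate tool.

```python
from scapy.all import ICMP, IP, UDP, send
from scapy.contrib.gtp import GTP_U_Header

# GTP-U encapsulated ICMP echo requests toward the S1-U endpoint, emulating
# many UEs; the addresses and TEIDs below are placeholders.
SGW_S1U_IP = "192.168.12.1"   # user-plane (X-GW-U / S-GW) S1-U address
ENB_S1U_IP = "192.168.12.2"   # emulated eNB S1-U address

def send_ue_echo(teid, ue_ip, payload_len=64):
    """Send one GTP-encapsulated ICMP echo request on behalf of one UE."""
    pkt = (IP(src=ENB_S1U_IP, dst=SGW_S1U_IP)
           / UDP(sport=2152, dport=2152)          # GTP-U port
           / GTP_U_Header(teid=teid)
           / IP(src=ue_ip, dst="8.8.8.8")
           / ICMP()
           / (b"x" * payload_len))
    send(pkt, verbose=False)

# Emulate 100 UEs, each with its own tunnel ID and inner IP address.
for i in range(100):
    send_ue_echo(teid=i + 1, ue_ip=f"172.16.0.{i + 1}")
```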
A. Compatibility
We set up two testbeds: (i) an SDN-based LTE network with
LL-MEC deployed and (ii) a legacy LTE network (Fig. 3).
We use OAI in both setups as a real-time 3GPP-compliant
LTE environment for attaching one COTS UE over the air.
OAI allows us to verify that our SDN-based LL-MEC can
operate with full LTE functionality, thus providing the related
latency measurements. All measurements use the same eNB
configuration, namely, FDD with transmission mode 1 (SISO)
and 5 MHz channel bandwidth in band 7. In addition, in the SDN-based LTE setup, LL-MEC requires the user identities
along with appropriate rules to be applied onto the OpenFlow-
enabled switches for establishing the UP.
TABLE I: Throughput with 5 MHz channel bandwidth
Direction   Setup        Mean (MB/s)   Std. dev.   Min    Max
Downlink    Legacy LTE   15.691        1.648       11.5   18.9
Downlink    LL-MEC       15.112        0.67        14.9   16.7
Uplink      Legacy LTE   8.214         1.059       4.19   11.5
Uplink      LL-MEC       8.197         0.644       7.34   9.44
Table I shows the throughput recorded with iperf over a 60-second period, measured once per second. COTS UEs in either setup have full Internet access, reaching the maximum throughput (about 15 MB/s in downlink and 8 MB/s in uplink) for a 5 MHz channel bandwidth. We observe better stability in both downlink and
uplink throughput (lower standard deviation) for LL-MEC due
to the separation of CP and UP.
B. Scalability
We study LL-MEC scalability with the number of UEs. In
what follows, we consider both CP and UP scalability.
Control Plane: Establishing default and dedicated bearers
in LL-MEC (see Sec. II-C) takes place through an interaction
among X-GW-C, EPS and switches. This induces extra control
signaling compared to the establishment of UP in legacy LTE
(e.g., tunnel end-points setup and iptables) due to UE, QoS and
OpenFlow setup rules. We measure the total payload in terms
of transmitted bytes3 used to establish default and dedicated
bearers as a function of the number of UEs for both LL-MEC
and the legacy LTE network. To assess the total signaling over-
head, the X-GW-C is placed outside the LL-MEC platform.
In addition, we characterize the contribution of OpenFlow
setup rules to the total overhead. Fig. 4 shows that LL-MEC
introduces a signaling overhead for both default and dedicated
bearer establishment due to the messages for setting up UE-
specific, QoS and OpenFlow rules between X-GW-C, EPS and
switches (see Fig. 2 for the message exchanges). This is the
cost of providing UP programmability towards applications.
However, this overhead can be significantly reduced if X-GW-
C is deployed as a service on top of LL-MEC, in which
case the traffic of S11 and S5/S8 can be transmitted locally.
Thus, the only remaining overhead is for OpenFlow setup rules
from EPS to OpenFlow-enabled switches, which significantly
lowers signaling (see Fig. 4, labeled as OpenFlow rules setup).
User Plane: We deploy a massive S1-U packet generator to
transmit a large number of GTP encapsulated ICMP packets
in order to determine the load of SDN-based core entities.
We generate traffic as soon as the generator and the MME are instantiated, gradually increasing the number of UEs as shown in Table II. LL-MEC traffic originates from different numbers of UEs every 100 ms with a 1400 KB payload. The Round Trip
Time (RTT) is measured upon receiving the Echo-Reply ICMP
packet. Our setup is similar to the one in Fig. 3, except that the massive S1-U packet generator acts as both UEs and eNBs, sending GTP traffic over the S1-U interface directly through
the wired network. For the CN, we use OAI with the MME
scenario player feature, allowing the network to emulate the
attachment procedure of a large number of UEs.
Figure 5(a) shows that the S-GW CPU load increases drastically, reaching 50% for 100 UEs, whereas for LL-MEC it remains at 5%. CPU usage is measured for the
entire S-GW machine, with 50% CPU usage translating to a single fully loaded processing core. Therefore, when the number of UEs reaches 5000, the S-GW is already overloaded, compared to only 6% in the case of LL-
MEC. This is because the OAI S-GW-C implementation is a
single-threaded process and thus cannot scale with the number
of available cores. Nevertheless, the S-GW CP is not the
real bottleneck, as it is only in charge of the establishment
3 Sum of each message payload related to LL-MEC, LTE, and OpenFlow.
Fig. 4: Control signaling overhead for (a) default and (b) dedicated bearer establishment: colored regions show the contribution of LL-MEC (yellow), the legacy LTE network (green) and the OpenFlow rules setup (purple) to the total overhead.
of the UP. Data traffic is actually handled by the Linux
GTP kernel module, its corresponding library and the iptables rules used in OAI. Any observed performance loss is mainly due to Netfilter [10], which is the main cause of CPU
overloading. This result reveals the benefits of CP and UP
separation, as the CPU overhead for context switching can be
avoided to drastically improve scalability and performance,
thus highlighting the benefits of MEC in terms of dynamic
traffic offloading, scalability and resource efficiency.
Fig. 5: (a) CPU usage and (b) latency against the number of UEs.
Regarding RTT measurements, Fig. 5(b) indicates that LL-
MEC reduces user latency significantly and with a much lower
variability compared to legacy LTE. Also, LL-MEC latency is 5 ms for 20 UEs and 12 ms for 5000 UEs with a 0% packet drop rate. Nevertheless, legacy LTE latency exhibits a spike for 1000 UEs, i.e., during the time when a massive packet
drop takes place (see Table II) due to CPU overloading (see
Fig. 5(a) and 5(b) together). This implies that processing each
packet in the legacy LTE setup requires more computing re-
sources than LL-MEC and poses scalability issues in general. The outcome of this part of our evaluation is that the SDN-based design of LL-MEC lowers UP latency by a factor of 2 on average and improves overall performance.
TABLE II: ICMP packet drop rate per number of UEs.
#UEs 20 50 100 200 500 1000 2000 5000
Legacy LTE 0% 0% 0% 0% 1.72% 7.86% 77.13% 77.09%
IV. CASE STUDIES
For all of the considered use cases presented next, we rely
on the OAI platform as an evolved LTE platform and keep the
same configuration as presented in Fig. 3.
A. End-to-End Mobile Network Slicing
Network slicing is a key enabler for sharing physical network resources across multiple logically isolated networks in
5G, hence supporting a wide range of vertical segments with
a diverse set of performance and service requirements. LL-MEC enables slicing for achieving isolation and performance guarantees in the UP by partially leveraging the 3GPP enhanced dedicated core network (eDECOR)4 concept. A network slice is essentially a group of UEs having the
same requirements or belonging to the same administrative
domain, with no traffic or policy differentiation within a slice.
Figure 6 shows how LL-MEC enables network program-
ming for creating two network slices; one that is served locally
by MEC and gets a higher over-the-air performance, and
another one that gets best-effort performance via redirection to a back-end server. We use an over-the-air LL-MEC deployment in SDN-based LTE. Slices are created with one COTS UE each, while
the percentage of radio resources and switching bandwidth for
each slice follows the corresponding slice-specific policies.
Fig. 6: Overview of end-to-end network slicing, with a low latency slice served locally at the edge and a best-effort slice directed to a back-end data center.
We design a slice policy enforcement algorithm to apply
different RAN resource allocation strategies, and implement a
low latency MEC application interfacing with the LL-MEC
platform through the SDK. The EPS does dynamic routing
management at the edge through the SDK, while real-time
control can be delegated back to the RAN. To demonstrate the
benefits of end-to-end slicing, we change the enforced policy
on the fly and measure the resulting downlink throughput.
Specifically, we consider an uncoordinated and a coordinated
end-to-end resource programmability scheme in terms of radio
resources for the RAN and switching bandwidth for the CN
parts, with corresponding performance results appearing in
Fig. 7. For the case of uncoordinated programmability we
enforce a policy at time t=10s that implies 1 Mbps for slice 1
and 15 Mbps for slice 2. Then, we apply a second policy at t=20s, this time only to the RAN, in order to lower the rate down to 50% of the radio resources (i.e., 8 Mbps) for each slice. Finally, we enforce a third policy only to the CN at t=33s to increase the switching bandwidth to 6 Mbps. For coordinated
programmability, however, only one policy is enforced at t=18s
to both the RAN and the CN, so as to create a best-effort slice
with 1 Mbps and a low latency slice with 15 Mbps.
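For illustration, the sketch below shows one way such a coordinated policy could be pushed: a radio-resource share via a FlexRAN-style REST call and a matching switching-bandwidth cap installed as an OpenFlow meter on the X-GW-U bridge. The controller URL, JSON schema, bridge name, meter IDs and radio shares are assumptions for illustration, not the exact policy enforcement code.

```python
import subprocess
import requests

# Hypothetical FlexRAN-style controller endpoint for slice policies.
FLEXRAN_URL = "http://127.0.0.1:9999/slice/policy"

def enforce_slice(slice_id, radio_share_pct, bandwidth_mbps, bridge="br0"):
    """Apply a slice policy to the RAN and the CN in one shot."""
    # RAN part: percentage of radio resources allocated to the slice.
    requests.post(
        FLEXRAN_URL,
        json={"slice_id": slice_id, "rb_share": radio_share_pct},
        timeout=1.0,
    ).raise_for_status()
    # CN part: cap the slice's switching bandwidth with an OpenFlow 1.3 meter.
    meter = (f"meter={slice_id},kbps,band=type=drop,"
             f"rate={bandwidth_mbps * 1000}")
    subprocess.run(
        ["ovs-ofctl", "-O", "OpenFlow13", "add-meter", bridge, meter],
        check=True,
    )

# Coordinated policy as in the experiment: a best-effort slice at 1 Mbps and
# a low latency slice at 15 Mbps; the radio shares are illustrative values.
enforce_slice(slice_id=1, radio_share_pct=10, bandwidth_mbps=1)
enforce_slice(slice_id=2, radio_share_pct=90, bandwidth_mbps=15)
```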
The results confirm the benefits of MEC and unified SDK
for enabling coordinated programmability and network slicing.
For uncoordinated slicing (Fig. 7(a)), bandwidth is not used
4 3GPP 23.707, Release 13, Stage 2 (2014); 3GPP 23.711, Release 14 (2015).
Fig. 7: Mobile network slicing throughput over time: (a) uncoordinated slicing; (b) coordinated slicing.
efficiently due to the asynchronous/uncoordinated resource
allocation between RAN and CN. For coordinated slicing
(Fig. 7(b)), however, the anticipated performance gap between
the “Low-Latency Slice” and the “Best-Effort Slice” is evident,
while resources get appropriately allocated to each slice in accordance with its specific requirements.
B. RAN-aware Video Content Optimization
We consider video optimization as a low latency application
and study the benefits of RAN-aware applications in improving user QoE. We monitor the cell load status and radio link
quality obtained from RNIS in order to adjust content quality
(e.g., via transcoding) at the server, in parallel with enforcing a new resource allocation policy on the underlying RAN. This allows jointly improving both network efficiency and user QoE (e.g., by avoiding buffer freeze). Note that in this use case we use a low latency network slice, as previously described in Fig. 6.
TABLE III: CQI index mapped to maximum TCP throughput
CQI   Downlink (Mb/s)   Uplink (Mb/s)
15    15.224            8.08
11    11.469            6.04
9     9.88              4.47
7     5.591             2.49
4     1.08              0.69
We implement a simple HTTP video streaming application
on top of LL-MEC and choose the channel quality indicator (CQI) as a flag reflecting the radio link quality of each UE. When a UE accesses the video service, LL-MEC can (i) program the routing path and redirect traffic to one of the MEC applications if the requested service is matched (e.g., based on the destination IP address), and (ii) adapt the rate according to the estimated UE throughput. There are multiple approaches to throughput guidance on top of the LL-MEC RNIS module, such as an exponential moving average or even a discrete link-quality
to throughput mapping. Table III shows the maximum video
TCP bitrate through a discrete mapping between CQI and
a sustainable TCP throughput identified during experiments.
This value serves as a predicted user throughput allowing the
server to adjust transcoding. As a follow-up to Sec. IV-A, coordinated slicing and joint programmability managed by authorized MEC applications result in an effective mobile network. Also, note that the timescale for detecting CQI changes is significantly shorter than that of the TCP congestion control mechanism. Instead of a reactive adaptation, a proactive adaptation of the service demand is also feasible through the RNIS.
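A minimal sketch of the discrete throughput-guidance approach is given below: it maps the reported CQI to the downlink values of Table III and keeps the target video bitrate below the prediction; the safety margin and the fallback to the nearest lower CQI entry are our own assumptions.

```python
# Downlink entries of Table III: measured sustainable TCP throughput per CQI.
CQI_TO_DL_MBPS = {15: 15.224, 11: 11.469, 9: 9.88, 7: 5.591, 4: 1.08}

def predicted_throughput(cqi):
    """Map a reported CQI to the nearest lower entry of the measured table."""
    known = sorted(CQI_TO_DL_MBPS)                     # [4, 7, 9, 11, 15]
    floor = max((c for c in known if c <= cqi), default=known[0])
    return CQI_TO_DL_MBPS[floor]

def target_video_bitrate(cqi, margin=0.8):
    """Keep the video bitrate below the predicted sustainable throughput."""
    return margin * predicted_throughput(cqi)

print(target_video_bitrate(11))   # about 9.2 Mb/s for CQI 11
```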
Fig. 8: Sequence diagram of DCN.
C. IoT Gateway
In this last case study, we consider LL-MEC as a platform
to deploy an IoT Gateway at the edge, leveraging again
(see Sec. IV-A) the eDECOR concept for CN slicing. Fig. 8
portrays a simplified sequence diagram on how user traffic
is directed to a dedicated X-GW-U (De-X-GW-U) in LL-
MEC based on slice IDs. Upon the reception of an attachment
request containing the slice ID, the MME/X-GW-C maps the UE slice ID (stored in the HSS) to the De-X-GW-U and initiates a set of OpenFlow rules for this newly instantiated switch. Then, the tunnel setup information for the De-X-GW-U is included in an Initial Context Setup Request sent to the eNB. At this point,
the dedicated user UP is established between the eNB and
the corresponding switch. We consider 2000 devices equally split into two slices of 1000 devices each. We use our massive S1-U packet generator to send sensory data to the dedicated switches in accordance with the device slice IDs. The latency measurements shown in Fig. 9, taken both with and without slicing, indicate that with a dedicated UP we achieve not only traffic isolation and scalability, but also greatly improved performance by lowering traffic latency and its variability.
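For illustration, the sketch below captures the slice-aware selection an MME/X-GW-C could apply, returning the dedicated user-plane endpoint to include in the Initial Context Setup Request; the slice names, switch identifiers and addresses are placeholders.

```python
# Slice-aware user-plane selection: the slice ID carried in the attach request
# selects a dedicated switch (De-X-GW-U) whose S1-U tunnel endpoint is then
# handed to the eNB. Slice names, switch identifiers and addresses are
# placeholders for illustration.
DEDICATED_UP = {
    "iot-slice-1": {"de_x_gw_u": "ovs-iot-1", "s1u_ip": "192.168.12.11"},
    "iot-slice-2": {"de_x_gw_u": "ovs-iot-2", "s1u_ip": "192.168.12.12"},
}
DEFAULT_UP = {"de_x_gw_u": "ovs-default", "s1u_ip": "192.168.12.1"}

def select_user_plane(slice_id):
    """Return the tunnel endpoint to include in the Initial Context Setup."""
    return DEDICATED_UP.get(slice_id, DEFAULT_UP)

print(select_user_plane("iot-slice-1"))  # dedicated switch ovs-iot-1
print(select_user_plane("unknown"))      # falls back to the default CN
```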
Fig. 9: Latency measurements of isolated IoT slices compared to an IoT slice on the default CN.
V. RELATED WORK
MEC attracts considerable research interest [11]–[14] from academia and industry, with some specifications completed and further work in progress. Initially, ETSI presented the
MEC ecosystem and main service scenarios in [15] to provide
a cloud computing environment for applications and content
in close proximity to the RAN. In addition, several MEC
services are proposed to offload tasks from mobile devices
to further reduce power consumption [16], [17]. The work
of [18] proposes a hierarchical MEC architecture leveraging
cloud computing and migrating mobile workloads for remote
execution at the cloud. A comparison among MEC, fog
computing and Cloudlet can be found in [19] and a complete
conceptual MEC architecture considering full functionalities,
interfaces, and applications in [20]. Similar to ETSI MEC, Cloudlet aims to provide computing resources at the network’s edge. However, it is loosely coupled with the underlying network, as it does not specify interfaces and data models
for the applications to interact with the network.
SDN is a building block of MEC featuring the decoupling
of CP and UP, the consolidation of the CP, and network
programmability through well-defined APIs. SDN came with
the invention of the OpenFlow concept [5] and has been
extensively used in wired networking. Using SDN for the CN
of LTE is an intuitive first step, with much work exploring this
concept [21]–[23] in mobile networks from different aspects
such as scalability, 3GPP interoperability, and performance
evaluation. SoftRAN [24] is a centralized CP for RAN that
abstracts all base stations into a virtual big base station.
FlexRAN [8] provides a flexible CP to build real-time RAN
control applications and remains flexible to realize different
degrees of coordination among RAN infrastructure entities.
VI. CONCLUSION AND FUTURE WORK
We propose LL-MEC, a low-latency MEC platform that
exploits SDN to facilitate low-latency edge-routed user traffic
flows in mobile networks. Towards the desired performance, LL-MEC provides the required flexibility and programmability to
coordinate decisions across different network segments while
remaining compliant with 3GPP specifications and ETSI MEC
ISG functionalities. Performance results reveal the benefits of
LL-MEC in reducing user and application latency in three
case studies, confirming its applicability in emerging IoT use
cases, content optimization, and network slicing. Future work includes focusing on use cases deserving a more profound study, such as policy and charging control, location-aware services and intelligent management towards a self-organizing MEC platform, inspired by machine learning and works like those on congestion pricing [25]–[27] or [28]–[30]. Finally, LL-MEC will support a later version of OpenFlow with interesting features like the meter action and extensible flow entry statistics.
ACKNOWLEDGMENT
This work has been funded in part through the European
Union’s H2020 program under grant agreement No 761913:
project SliceNet, and by the French Government (National
Research Agency, ANR) through the “Investments for the
Future” Program reference #ANR-11-LABX-0031-01.
REFERENCES
[1] N. Nikaein, E. Schiller, R. Favraud, K. Katsalis, D. Stavropoulos,
I. Alyafawi, Z. Zhao, T. Braun, and T. Korakis, “Network store:
Exploring slicing in future 5G networks,” pp. 8–13, 2015.
[2] A. Ksentini and N. Nikaein, “Toward Enforcing Network Slicing on
RAN: Flexibility and Resources Abstraction,” IEEE Communications
Magazine, vol. 55, no. 6, pp. 102–108, 2017.
[3] X. Foukas, M. K. Marina, and K. Kontovasilis, “Orion: RAN Slicing
for a Flexible and Cost-Effective Multi-Service Mobile Network Archi-
tecture,” in 23rd Annual International Conference on Mobile Computing
and Networking (MobiCom ’17). ACM, 2017.
[4] M. Vögler, J. M. Schleicher, C. Inzinger, and S. Dustdar, “A scalable framework for provisioning large-scale IoT deployments,” ACM Transactions on Internet Technology (TOIT), vol. 16, no. 2, p. 11, 2016.
[5] N. McKeown, T. Anderson, H. Balakrishnan, G. Parulkar, L. Peterson,
J. Rexford, S. Shenker, and J. Turner, “OpenFlow: Enabling Innovation
in Campus Networks,” SIGCOMM Comput. Commun. Rev., 2008.
[6] X. Jin, L. E. Li, L. Vanbever, and J. Rexford, “SoftCell: Scalable and
Flexible Cellular Core Network Architecture,” in ACM Conference on
Emerging Networking Experiments and Technologies, 2013.
[7] A. Huang, N. Nikaein, T. Stenbock, A. Ksentini, and C. Bonnet, “Low
Latency MEC Framework for SDN-based LTE/LTE-A Networks,” in
Proc. of the IEEE International Conference on Communications, 2017.
[8] X. Foukas, N. Nikaein, M. M. Kassem, M. K. Marina, and K. Konto-
vasilis, “FlexRAN: A Flexible and Programmable Platform for Software-
Defined Radio Access Networks,” in Proc. of 12th International Con-
ference on Emerging Networking EXperiments and Technologies, 2016.
[9] N. Nikaein, M. K. Marina, S. Manickam, A. Dawson, R. Knopp, and C. Bonnet, “OpenAirInterface: A flexible platform for 5G research,” SIGCOMM Comput. Commun. Rev., 2014.
[10] R. Niemann et al., “Performance Evaluation of netfilter: A Study on the Performance Loss When Using netfilter as a Firewall,” CoRR, 2015.
[11] P. Mach and Z. Becvar, “Mobile edge computing: A survey on architecture and computation offloading,” CoRR, 2017.
[12] A. Ahmed and E. Ahmed, “A Survey on Mobile Edge Computing,” in
International conf. on intelligent Systems and COntrol (ISCO), 2016.
[13] Y. Mao, C. You, J. Zhang, K. Huang, and K. B. Letaief, “Mobile edge computing: Survey and research outlook,” CoRR, 2017.
[14] W. Shi, J. Cao, Q. Zhang, Y. Li, and L. Xu, “Edge computing: Vision
and challenges,” IEEE Internet of Things Journal, Oct 2016.
[15] Y. C. Hu, M. Patel, D. Sabella, N. Sprecher, and V. Young, “Mobile edge computing: A key technology towards 5G,” ETSI White Paper, 2015.
[16] M. T. Beck, M. Werner, S. Feld, and T. Schimper, “Mobile edge
computing: A taxonomy,” in Proceedings of International Conference
on Advances in Future Internet, 2014.
[17] M. T. Beck and S. Feld et al., “ME-VoLTE: Network functions for
energy-efficient video transcoding at the mobile edge,” in 18th Interna-
tional Conference on Intelligence in Next Generation Networks, 2015.
[18] L. Tong, Y. Li, and W. Gao, “A hierarchical edge cloud architecture for
mobile computing,” in Proceedings of the 35th Annual IEEE Interna-
tional Conference on Computer Communications, 2016.
[19] R. Roman, J. Lopez, and M. Mambo, “MEC, Fog et al.: A Survey and
Analysis of Security Threats and Challenges,” CoRR, 2016.
[20] C.-Y. Chang, K. Alexandris, N. Nikaein, K. Katsalis, and T. Spyropoulos,
“MEC Architectural Implications for LTE/LTE-A Networks,” in Work-
shop on Mobility in the Evolving Internet Architecture, 2016.
[21] K. Pentikousis, Y. Wang, and W. Hu, “MobileFlow: Toward
Software Defined Mobile Networks,” IEEE Comm. Magazine, 2013.
[22] M. Martinello, M. R. N. Ribeiro et al., “Keyflow: a prototype for
evolving SDN toward core network fabrics,” IEEE Network, 2014.
[23] V.-G. Nguyen and Y. Kim, “Signaling Load Analysis in
Openflow-enabled LTE/EPC Architecture,” in Proc. of Intern. Confer-
ence on Information & Communication Technology Convergence, 2014.
[24] A. Gudipati, D. Perry et al., “SoftRAN: Software Defined Radio Access
Network,” in ACM SIGCOMM Workshop on Hot Topics in SDN, 2013.
[25] V. A. Siris, X. Vasilakos, and G. C. Polyzos, “Efficient proactive caching
for supporting seamless mobility,” in Proceeding of IEEE International
Symposium on a World of Wireless, Mobile and Multimedia Networks,
WoWMoM 2014, Sydney, Australia, June 19, 2014, 2014, pp. 1–6.
[26] X. Vasilakos, V. A. Siris, and G. C. Polyzos, “Addressing niche demand
based on joint mobility prediction and content popularity caching,”
Computer Networks, vol. 110, pp. 306–323, 2016.
[27] X. Vasilakos, M. Al-Khalidi, V. A. Siris, M. J. Reed, N. Thomos,
and G. C. Polyzos, “Mobility-based Proactive Multicast for Seamless
Mobility Support in Cellular Network Environments,” in Workshop on
Mobile Edge Communications, MECOMM ’17. ACM, 2017, pp. 25–30.
[28] V. Giannaki, X. Vasilakos, C. Stais, G. C. Polyzos, and G. Xylomenos,
“Supporting mobility in a publish subscribe internetwork architecture,”
in Proceedings of the 16th IEEE Symposium on Computers and Commu-
nications, ISCC 2011, Kerkyra, Corfu, Greece, June 28 - July 1, 2011,
2011, pp. 1030–1032.
[29] K. Poularakis and L. Tassiulas, “Code, Cache and Deliver on the Move:
A Novel Caching Paradigm in Hyper-Dense Small-Cell Networks,”
IEEE Trans. Mob. Comput., vol. 16, no. 3, pp. 675–687, 2017.
[30] S. Zhang, P. He, K. Suto, P. Yang, L. Zhao, and X. Shen, “Cooperative
Edge Caching in User-Centric Clustered Mobile Networks,” IEEE Trans.
Mob. Comput., vol. 17, no. 8, pp. 1791–1805, 2018.