MEC-RHA: Demonstration of Novel Service
Request Handling Algorithm for MEC
Gayan Dilanka∗, Lakshan Viranga†, Rajitha Pamudith‡, Tharindu D. Gamage§, Pasika Ranaweera¶
Indika A. M. Balapuwaduge∥, Madhusanka Liyanage∗∗
∗†‡§¶∥Department of Electrical and Information Engineering, University of Ruhuna, Galle, Sri Lanka
¶∗∗School of Computer Science, University College Dublin, Ireland
∗∗Centre for Wireless Communications, University of Oulu, Finland
Email: ∗gayandilankak@gmail.com, †lakshanviranga0000@gmail.com, ‡pamudithrajitha467@gmail.com, §tharindu@eie.ruh.ac.lk
¶pasika.ranaweera@ucdconnect.ie, ∥mendis@eie.ruh.ac.lk, ∗∗madhusanka@ucd.ie, ∗∗madhusanka.liyanage@oulu.fi
Abstract—Multi-Access Edge Computing (MEC) is a cloud
computing evolution that delivers end-user services at the mobile
network’s edge. As a result, MEC guarantees that users will
benefit from ultra-low latency and increased bandwidth when
using the services. The orchestration process is the holistic
management and control of the edge computing platforms.
Handling of service requests forwarded by the MEC subscribers
is an inceptive function that requires the intervention of the
orchestrator. This paper demonstrates how an advanced service
request handler algorithm (MEC-RHA) works on MEC orches-
tration, considering factors of service priority levels, feasibility,
and resource availability when launching a service; while an
optimal MEC server selection process is formed based on those
factors.
Index Terms—Edge computing, MEC, Orchestration, Request
Handling, Virtualization
I. INTRODUCTION
The ever-increasing demand for information technology
(IT) based services has presented numerous challenges to
service providers. Service providers of the Internet and cloud
computing are primarily impacted by high latency and band-
width utilization scenarios. MEC [1] is the evolution of cloud
computing that brings services to the edge of the mobile
network to address the aforementioned drawbacks and provide
high quality services to end-users. With this new evolution,
the centralized cloud services become more decentralized
while reducing the bottlenecks in ingress traffic towards the
cloud servers. With the launch of the 5th generation (5G) of mobile
networks, several emerging use cases such as autonomous driving,
Augmented Reality (AR), and Virtual Reality (VR) have appeared. To
deliver an uninterrupted and realistic experience for these use cases,
cloud-native solutions should offer the highest QoS and
QoE ratings. MEC is the best available option to provide
these requirements. Autonomous orchestration is a principal
function that gives MEC precedence over cloud computing
solutions. The function of orchestration [2] is to automatically
configure, manage, coordinate, and monitor the virtualized
service instances deployed in a virtual platform.
The orchestrator has visibility across the holistic MEC
platform. As a result, it can make automated decisions
based on the live status and performance of the platform,
particularly when a service must be deployed for a specific service
request from User Equipment (UE) and the orchestrator must decide
where to deploy that service. This decision-making process,
together with the service request handling, is a novel strategy that
is imperative for launching MEC at an industrial scale.
In this demo, we demonstrate the feasibility of the pro-
posed request handler algorithm (MEC-RHA) with MEC
server selection through a prototype implementation. Section
II describes the system architecture, and Section III presents
a prototype implementation of the MEC platform and the
orchestrator. Section IV gives an overview of the showcase
we intend to present at the demo session, followed by Section
V, which specifies a set of technical requirements.
II. SYSTEM ARCHITECTURE
Figure 1: Request-response path.
When a request is received from a UE, it is subjected to
several inspections to ensure that it is compatible with the se-
curity and feasibility requirements [3]. If the requirements are
met, the request will be routed to the MEC for provisioning;
otherwise, the request will be routed to the cloud.
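As an illustration of this admission step, the following minimal Python sketch checks a request against a set of security and feasibility rules and decides whether to route it to the MEC or fall back to the cloud; the field names and thresholds are hypothetical and only indicate the flow described above, not the exact rules used by MEC-RHA.

# Minimal sketch of the request inspection step (field names and
# thresholds are illustrative, not the exact MEC-RHA rules).

ALLOWED_SERVICES = {"video-analytics", "ar-rendering", "iot-aggregation"}
EDGE_MAX_CPU_CORES = 4        # hypothetical per-request caps at the edge
EDGE_MAX_RAM_MB = 4096

def route_request(request: dict) -> str:
    """Return "MEC" if the request passes the security and feasibility
    inspections, otherwise "CLOUD"."""
    # Security inspection: the request must carry a credential and ask
    # for a known service type.
    if not request.get("auth_token") or request.get("service") not in ALLOWED_SERVICES:
        return "CLOUD"
    # Feasibility inspection: the requested resources must fit what a
    # single edge server can reasonably offer.
    if request.get("cpu_cores", 0) > EDGE_MAX_CPU_CORES:
        return "CLOUD"
    if request.get("ram_mb", 0) > EDGE_MAX_RAM_MB:
        return "CLOUD"
    return "MEC"

# Example: a small AR request is admitted to the edge.
print(route_request({"auth_token": "tok-123", "service": "ar-rendering",
                     "cpu_cores": 2, "ram_mb": 1024}))   # -> MEC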
[Fig. 2 elements: (1) request from the UE to the orchestrator; (2a) instruction to service initiation at the MEC, (2b) instruction to service initiation at the cloud; (3a) response from the MEC, (3b) response from the cloud; the edge hosts MEC Servers 01-03, each running containers monitored by cAdvisor, alongside a monitoring system.]
Figure 2: Overall Service Provisioning Functional Layout of the MEC-RHA.
As illustrated in Fig. 1, the request is sent from the UE,
and if the orchestrator decides to deliver the service from the
MEC, it chooses the best MEC server based on the Bit Error
Rate (BER) estimated for the UE's access channel [4], together
with the feasibility and resource availability of the candidate MEC
servers. The orchestrator then instructs the chosen MEC server to
allocate the required infrastructure (in terms of CPU, RAM, and
storage), and the service is delivered from there.
Fig. 2 depicts the overall service provisioning layout,
with UEs that can be cars, mobile phones, or other Internet of
Things (IoT) devices. The orchestrator resides in the internal
network of the Internet service provider, and there may be
many MEC servers at the mobile network's edge. The memory,
CPU, bandwidth, and other parameters are bound to the
user's request. After the request arrives at the orchestrator,
the orchestrator generates a connection-establishment response for
the user.
The most crucial element of our proposed MEC-RHA is the
orchestration process. Once the request from the user arrives
at the orchestrator, the service is requested using the parameters
mentioned earlier. As the initial step, the request is placed
in a queue that is served according to the service priority method.
Then, the orchestrator filters out the pertinent MEC servers
based on available infrastructure and resources, feasibility, and
the BER corresponding to the user request parameters. From these,
the most suitable MEC server is chosen to handle the user
request. The BER of the channel between the UE and a MEC server
influences both the selection of the MEC server and the amount of
virtual infrastructure allocated at the selected server.
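The following Python sketch illustrates, under our own simplifying assumptions, the queueing, filtering, and selection steps described above; the server fields, priority encoding, and BER-based score are only indicative, since the exact priority and channel models are defined in [3] and [4].

import heapq

# Illustrative sketch of the MEC-RHA queueing and selection loop;
# data fields and the dummy BER model below are assumptions.

def enqueue(queue, request):
    """Place the request in a priority queue (lower value = higher priority)."""
    heapq.heappush(queue, (request["priority"], request["id"], request))

def select_server(request, servers, ber_of):
    """Filter servers on resources/feasibility and pick the lowest-BER one."""
    candidates = [
        s for s in servers
        if s["free_cpu"] >= request["cpu_cores"]
        and s["free_ram_mb"] >= request["ram_mb"]
        and s["free_storage_gb"] >= request["storage_gb"]
        and ber_of(request, s) <= request["max_ber"]        # channel feasibility
    ]
    if not candidates:
        return None                                          # fall back to the cloud
    return min(candidates, key=lambda s: ber_of(request, s)) # best channel wins

# Example with two servers and a dummy BER estimate.
servers = [
    {"name": "mec-01", "free_cpu": 4, "free_ram_mb": 8192, "free_storage_gb": 50},
    {"name": "mec-02", "free_cpu": 2, "free_ram_mb": 2048, "free_storage_gb": 20},
]
request = {"id": 1, "priority": 0, "cpu_cores": 2, "ram_mb": 4096,
           "storage_gb": 10, "max_ber": 1e-3}
queue = []
enqueue(queue, request)
_, _, req = heapq.heappop(queue)
ber_of = lambda r, s: 1e-4 if s["name"] == "mec-01" else 5e-4   # dummy channel model
chosen = select_server(req, servers, ber_of)
print(chosen["name"] if chosen else "CLOUD")                     # -> mec-01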
III. PROTOTYPE SETUP
As a functional prototype, the proposed orchestration archi-
tecture was developed in a virtual environment. The prototype
virtual environment was deployed using the VMware ESXi bare-
metal hypervisor, with the related development applications and
infrastructure shown in Table I. Python-based sockets were used
to connect each entity, as shown in Fig. 3.
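As a hedged illustration of that connectivity, the snippet below shows how a UE-side script might send a JSON service request to the orchestrator over a plain TCP socket; the orchestrator address, port, and message fields are placeholders rather than our exact wire format.

import json
import socket

# Hypothetical orchestrator endpoint used only for this sketch.
ORCHESTRATOR_ADDR = ("192.0.2.10", 5000)

def send_service_request(request: dict) -> dict:
    """Send one JSON-encoded service request and wait for the orchestrator's reply."""
    with socket.create_connection(ORCHESTRATOR_ADDR, timeout=10) as sock:
        sock.sendall(json.dumps(request).encode("utf-8"))
        reply = sock.recv(4096)                     # small, single-message exchange
    return json.loads(reply.decode("utf-8"))

# Example request carrying the parameters bound to the user's request.
response = send_service_request({
    "service": "iot-aggregation",
    "priority": 1,
    "cpu_cores": 1,
    "ram_mb": 512,
    "storage_gb": 2,
})
print(response)   # e.g. {"status": "accepted", "server": "mec-01", "port": 8080}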
Docker containerization technology [5] was used for service
deployment on the MEC servers. Here, the launched Docker
container is the infrastructure element that runs the particular
service, and Docker's lightweight nature allowed us to use the
resources optimally. When a potential MEC service
is selected, it is launched as a Docker container at the MEC
server, following the user's specifications. A Docker container is
created from a Docker image. Most Docker images are stored
in the MEC server's local repository; if an image is not found
on the MEC server, it is migrated from the cloud.
These images are typically obtained from Docker Hub. Service
providers can also create their own service images and keep
them in the MEC servers' local repositories.
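A minimal sketch of this launch step using the Docker SDK for Python is shown below; the image name and resource limits are example values, and we assume the image is pulled from a registry only when it is missing from the server's local repository.

import docker
from docker.errors import ImageNotFound

# Sketch of launching a service container on a MEC server; the image
# name and resource limits are illustrative values only.

def launch_service(image_name: str, cpu_cores: float, ram_mb: int):
    client = docker.from_env()
    try:
        client.images.get(image_name)              # already in the local repository?
    except ImageNotFound:
        client.images.pull(image_name)             # otherwise fetch it (e.g. from Docker Hub)

    # Start the container with the infrastructure requested by the user.
    return client.containers.run(
        image_name,
        detach=True,
        nano_cpus=int(cpu_cores * 1e9),            # CPU quota in units of 1e-9 CPUs
        mem_limit=f"{ram_mb}m",                    # RAM limit
    )

container = launch_service("nginx:alpine", cpu_cores=0.5, ram_mb=256)
print(container.short_id)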
We created an orchestrator monitoring system using an
open-source containerized application called cAdvisor.
This tool provides real-time monitoring of the individual MEC
servers and their containers, measuring CPU, memory, and
network throughput as monitoring parameters.
Monitoring each instance enables the identification of inter-
dependencies across applications, services, processes, and
cloud components, allowing a comprehensive understanding of
all edge server applications and the automatic synchronization
of activities. In addition, the built-in monitoring strategy ensures
that the edge servers run smoothly, with rapid monitoring and
troubleshooting of discrepancies induced by applications.
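The orchestrator can pull these metrics over cAdvisor's REST API. The sketch below assumes cAdvisor's default port 8080 and the v1.3 Docker endpoint; the MEC server host name is a placeholder, and the response fields are parsed defensively since the exact JSON layout depends on the cAdvisor version.

import requests

# Sketch of polling cAdvisor for per-container statistics; address and
# endpoint are assumptions based on cAdvisor's default configuration.
CADVISOR_URL = "http://mec-server-01:8080/api/v1.3/docker"

def latest_container_stats():
    info = requests.get(CADVISOR_URL, timeout=5).json()
    summary = {}
    for key, container in info.items():
        stats = container.get("stats") or []
        if not stats:
            continue
        last = stats[-1]                                   # most recent sample
        summary[container.get("name", key)] = {
            "cpu_total_ns": last.get("cpu", {}).get("usage", {}).get("total"),
            "memory_bytes": last.get("memory", {}).get("usage"),
        }
    return summary

for name, usage in latest_container_stats().items():
    print(name, usage)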
IV. DEMONSTRATION AND INTERACTION
The whole process shown in Fig. 1 is presented in
this demonstration. The demo is divided into two parts. The
first part covers service request handling, service initiation,
and service provisioning from the MEC server to the end devices.
In the second part, after the service initiation process, the resource
usage and other parameters of the provisioned services are monitored.
For the demonstration, we created a service request with the
user requirements using a Python script, ran the script on the
user devices, and the user request followed the path shown in
Fig. 1. We show the live logs of each entity, i.e., the MEC servers,
the orchestrator, and the cloud, to illustrate the process. Sample live
logs of the orchestrator are shown in Fig. 4. Thus, the viewers gain a
better understanding of the processes of each of these entities.
Finally, once the service has been deployed, it can be monitored using
the orchestrator monitoring tool, as shown in Fig. 5.

Figure 3: Network connectivity of MEC demo environment.

Figure 4: Orchestrator live logs.
A. Demonstration Interaction
Attendees at the demonstration will be able to interact with
the entire service request handling process that we have
proposed. In particular, we show the functionality of an IoT
device, where IoT data is sent as a request and processed in
a container running on a MEC server. Here, the attendee can
observe the process of sending a request from an IoT device,
processing the request at the orchestrator, launching a service
container, and responding to the IoT device. The entire
orchestrator and MEC environment can also be visualized using
Graphical User Interfaces (GUIs) to achieve a user-friendly view
of the architecture.

Figure 5: Service Monitoring System.
V. TECHNICAL REQUIREMENTS
VMware ESXi version 6.5 is used as the virtualization platform's
bare-metal hypervisor. Its direct access to the hardware makes it
perform more smoothly and scale better than other hypervisors.
Our prototype implementation consists of three VMs, each configured
with Ubuntu 18.04. VMware's web client is the most convenient method
to manage our ESXi hosts. We deployed Docker version 19.03.12 on our
VMs to implement the edge computing platform. Furthermore, we employed
Python version 3 and its libraries to maintain stable connections and
data exchange; notable libraries include socket, docker, and haversine.
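To illustrate one possible use of the haversine library, the sketch below computes the geographic distance between a UE and candidate MEC servers, which could feed the feasibility part of the server selection; the coordinates and the role of the distance check are our assumptions for this example.

from haversine import haversine  # pip install haversine; distances in km by default

# Hypothetical coordinates (latitude, longitude) for this sketch.
ue_location = (6.0535, 80.2210)            # e.g. a UE near Galle
mec_servers = {
    "mec-01": (6.0790, 80.1920),
    "mec-02": (6.9271, 79.8612),
}

# Distance could serve as one feasibility input when ranking servers.
distances = {name: haversine(ue_location, loc) for name, loc in mec_servers.items()}
nearest = min(distances, key=distances.get)
print(nearest, round(distances[nearest], 2), "km")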
Table I: System specifications of the demo environment.
Component            Version
VMware ESXi          6.5
Virtual machine OS   Ubuntu 18.04
Docker               19.03.12
Python               3
ACKNOWLEDGEMENT
This work is partly supported by the Academy of Finland
6Genesis project (grant no. 318927).
REFERENCES
[1] N. Slamnik-Kriještorac, E. Municio, H. C. Resende, S. A. Hadiwardoyo,
J. M. Marquez-Barja et al., “Network Service and Resource Orchestra-
tion: A Feature and Performance Analysis within the MEC-Enhanced
Vehicular Network Context,” Sensors, vol. 20, no. 14, p. 3852, 2020.
[2] T. Taleb, K. Samdanis, B. Mada, H. Flinck, S. Dutta, and D. Sabella, “On
Multi-access Edge Computing: A Survey of the Emerging 5G Network
Edge Cloud Architecture and Orchestration,” IEEE Communications
Surveys & Tutorials, vol. 19, no. 3, pp. 1657–1681, 2017.
[3] G. Dilanka, L. Viranga, R. Pamudith, T. D. Gamage, P. Ranaweera, I. A.
Balapuwaduge, and M. Liyanage, “A novel request handler algorithm for
multi-access edge computing platforms in 5G.”
[4] G. Dilanka, L. Viranga, R. Pamudith, T. D. Gamage, P. S. Ranaweera,
I. A. Balapuwaduge, and M. Liyanage, “A novel server selection strategy
for multi-access edge computing.”
[5] B. I. Ismail, E. M. Goortani, M. B. Ab Karim, W. M. Tat, S. Setapa,
J. Y. Luke, and O. H. Hoe, “Evaluation of Docker as Edge Computing
Platform,” in 2015 IEEE Conference on Open Systems (ICOS). IEEE,
2015, pp. 130–135.