MEC-RHA: Demonstration of Novel Service Request Handling Algorithm for MEC
Gayan Dilanka, Lakshan Viranga, Rajitha Pamudith, Tharindu D. Gamage, Pasika Ranaweera,
Indika A. M. Balapuwaduge, Madhusanka Liyanage
Department of Electrical and Information Engineering, University of Ruhuna, Galle, Sri Lanka
School of Computer Science, University College Dublin, Ireland
Centre for Wireless Communications, University of Oulu, Finland
Email: gayandilankak@gmail.com, lakshanviranga0000@gmail.com, pamudithrajitha467@gmail.com, tharindu@eie.ruh.ac.lk,
pasika.ranaweera@ucdconnect.ie, mendis@eie.ruh.ac.lk, madhusanka@ucd.ie, madhusanka.liyanage@oulu.fi
Abstract—Multi-Access Edge Computing (MEC) is a cloud computing evolution that delivers end-user services at the mobile network's edge. As a result, MEC guarantees that users will benefit from ultra-low latency and increased bandwidth when using the services. The orchestration process is the holistic management and control of the edge computing platforms. Handling of service requests forwarded by the MEC subscribers is an inceptive function that requires the intervention of the orchestrator. This paper demonstrates how an advanced service request handler algorithm (MEC-RHA) works on MEC orchestration, considering factors of service priority levels, feasibility, and resource availability when launching a service, while an optimal MEC server selection process is formed based on those factors.
Index Terms—Edge computing, MEC, Orchestration, Request Handling, Virtualization
I. INTRODUCTION
The ever-increasing demand for information technology (IT) based services has presented numerous challenges to service providers. Service providers of the Internet and cloud computing are primarily impacted by high latency and bandwidth utilization scenarios. MEC [1] is the evolution of cloud computing that brings services to the edge of the mobile network to address the aforementioned drawbacks and provide high-quality services to end-users. With this new evolution, the centralized cloud services become more decentralized while reducing the bottlenecks in ingress traffic towards the cloud servers. With the launch of the 5th generation (5G) of mobile networks, several emerging use cases have appeared, such as autonomous driving, Augmented Reality (AR), and Virtual Reality (VR). To obtain an uninterrupted and realistic experience of these use cases, cloud-native solutions should feature the highest QoS and QoE ratings. MEC is the best available option to provide these requirements. Autonomous orchestration is a principal function that grants precedence for MEC over cloud computing solutions. The function of orchestration [2] is to automatically configure, manage, coordinate, and monitor the virtualized service instances deployed in a virtual platform.
The orchestrator has visibility across the holistic MEC platform. As a result, it is easier to make automated decisions based on the live status and performance of the platform, particularly when deploying a service for a specific service request from User Equipment (UE), where the orchestrator must decide where to deploy the service. This decision-making process, along with the service request handling, is a novel strategy that is imperative for launching MEC at the industrial scale.
In this demo, we demonstrate the feasibility of the proposed request handler algorithm (MEC-RHA) with MEC server selection through a prototype implementation. Section II describes the system architecture, and Section III presents a prototype implementation of the MEC platform and the orchestrator. Section IV gives an overview of the showcase we intend to present at the demo session, followed by Section V, which specifies a set of technical requirements.
II. SYSTEM ARCHITECTURE
Figure 1: Request-response path.
When a request is received from a UE, it is subjected to several inspections to ensure that it is compatible with the security and feasibility requirements [3]. If the requirements are met, the request is routed to the MEC for provisioning; otherwise, the request is routed to the cloud.
Figure 2: Overall Service Provisioning Functional Layout of the MEC-RHA.
As illustrated in Fig. 1, the request is sent from the UE, and if the orchestrator decides to deliver the service from the MEC, it chooses the best MEC server based on the Bit Error Rate (BER) calculation of the UE's access channel [4], together with the feasibility and resource availability of the candidate MEC servers. The orchestrator then instructs the MEC to allocate the required infrastructure (CPU, RAM, and storage) on the chosen MEC server, and the service is delivered from there.
Fig. 2 depicts the overall portrayal of service provisioning, with UEs that can be cars, mobile phones, or other Internet of Things (IoT) devices. The orchestrator resides in the internal network of the Internet service provider, and there may be many MEC servers at the mobile network's edge. The memory, CPU, bandwidth, and other parameters are bound to the user's request. After the request arrives at the orchestrator, it generates the connection-establishment response for the user.
The most crucial element of our proposed MEC-RHA is the orchestration process. Once the request from the user arrives at the orchestrator, it requests the service using the parameters mentioned earlier. As the initial step, the request is placed in a queue that is served following the service priority method. Then, the orchestrator filters out the pertinent MEC servers based on available infrastructure and resources, feasibility, and the BER corresponding to the user request parameters. From these, the most convenient MEC server is chosen to handle the user request. The BER of the channel between the UE and the MEC server influences both the selection of the MEC server and the amount of virtual infrastructure allocated at the selected MEC server.
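For illustration, a minimal Python sketch of this request handling flow is given below. The class and field names (ServiceRequest, MECServer, etc.) are hypothetical, and the feasibility filter and BER-based ranking are a simplified outline of the idea rather than the exact MEC-RHA implementation.

```python
# Hypothetical sketch of the flow described above: queue requests by priority,
# filter servers on resource availability, then prefer the lowest-BER server.
import heapq
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass(order=True)
class ServiceRequest:
    priority: int                       # lower value = higher priority
    cpu: float = field(compare=False)   # required vCPUs
    ram: float = field(compare=False)   # required RAM (GB)
    storage: float = field(compare=False)

@dataclass
class MECServer:
    name: str
    free_cpu: float
    free_ram: float
    free_storage: float
    ber: float                          # BER of the UE access channel

def select_server(req: ServiceRequest, servers: List[MECServer]) -> Optional[MECServer]:
    """Return the most suitable MEC server, or None to fall back to the cloud."""
    feasible = [s for s in servers
                if s.free_cpu >= req.cpu and s.free_ram >= req.ram
                and s.free_storage >= req.storage]
    return min(feasible, key=lambda s: s.ber, default=None)

# Requests are served in priority order from a queue.
queue: List[ServiceRequest] = []
heapq.heappush(queue, ServiceRequest(priority=1, cpu=0.5, ram=1.0, storage=2.0))
heapq.heappush(queue, ServiceRequest(priority=3, cpu=2.0, ram=4.0, storage=8.0))

servers = [MECServer("mec-01", 4, 8, 50, ber=1e-5),
           MECServer("mec-02", 2, 4, 20, ber=1e-6)]

while queue:
    request = heapq.heappop(queue)
    target = select_server(request, servers)
    print(request, "->", target.name if target else "cloud")
```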
III. PROTOTYPE SETUP
As a functional prototype, the proposed orchestration architecture was developed in a virtual environment. The prototype virtual environment was deployed using the VMware ESXi bare-metal hypervisor, with the related development applications and infrastructure shown in Table I. Python-based sockets were used to connect each entity, as shown in Fig. 3.
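As a rough illustration of this connectivity, the snippet below shows a plain TCP exchange using Python's standard socket module. The host name, port number, and JSON message format are assumptions made for illustration, not the exact protocol of our prototype.

```python
# Minimal sketch of how entities could exchange messages over Python sockets.
# Port 5000 and the JSON message format are illustrative assumptions.
import json, socket

def send_request(host: str, port: int, payload: dict) -> dict:
    """Send a JSON-encoded request and return the JSON-decoded reply."""
    with socket.create_connection((host, port)) as sock:
        sock.sendall(json.dumps(payload).encode())
        sock.shutdown(socket.SHUT_WR)          # signal end of request
        reply = b"".join(iter(lambda: sock.recv(4096), b""))
    return json.loads(reply.decode())

# Example: a UE forwarding its service request to the orchestrator.
# response = send_request("orchestrator.local", 5000,
#                         {"service": "object-detection", "cpu": 1, "ram_gb": 2})
```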
The Docker containerization technology [5] was used for the service deployment on MEC servers. Here, the launched Docker container is the infrastructure element that runs the particular service. Thanks to Docker's lightweight nature, we could use the resources optimally. When a potential MEC service is selected, it is launched as a Docker container at the MEC server, following the user's specifications. A Docker container is created from a Docker image. Most Docker images are stored in the MEC server's local repository; otherwise, the image is migrated from the cloud if it is not found on the MEC server. These images are typically obtained from Docker Hub. Service providers can create their own service images and keep them in the MEC servers' local repositories.
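A container launch of this kind can be sketched with the Docker SDK for Python as follows. The image name, container name, resource limits, and port mapping are placeholders chosen for illustration, and the local-first pull logic is our simplified reading of the behaviour described above.

```python
# Sketch of launching a service container per the user's specification.
# Image name, limits, and ports are illustrative placeholders.
import docker
from docker.errors import ImageNotFound

client = docker.from_env()
image = "example/iot-service:latest"    # hypothetical service image

try:
    client.images.get(image)            # prefer the MEC server's local repository
except ImageNotFound:
    client.images.pull(image)           # otherwise fetch it (e.g., from Docker Hub)

container = client.containers.run(
    image,
    detach=True,
    name="ue42-service",                # hypothetical per-request name
    mem_limit="512m",                   # RAM bound to the user's request
    nano_cpus=500_000_000,              # 0.5 vCPU
    ports={"8080/tcp": 30080},
)
print(container.short_id, container.status)
```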
We created an orchestrator monitoring system using an open-source Docker container application called cAdvisor. Real-time monitoring of the individual MEC servers and containers can be done using this tool, with CPU, memory, and network throughput measured as monitoring parameters. Monitoring each instance enables the identification of interdependencies across applications, services, processes, and cloud components, allowing for a comprehensive understanding of all edge server applications and the automatic synchronization of activities. In addition, the built-in monitoring strategy assures the edge server's smooth running through rapid monitoring and troubleshooting of discrepancies induced by applications.
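For reference, per-container statistics of the kind listed above can be polled from cAdvisor's REST API. The host name, port 8080, the v1.3 endpoint, and the response fields below reflect cAdvisor defaults as we understand them and should be read as assumptions rather than the exact queries issued by our monitoring system.

```python
# Hedged sketch: polling cAdvisor's REST API for container statistics.
# Host, port, endpoint, and field names are assumptions based on cAdvisor defaults.
import requests

CADVISOR = "http://mec-server-01:8080"          # hypothetical MEC server address

def latest_stats(container_name: str) -> dict:
    """Return the most recent stats sample for a Docker container."""
    url = f"{CADVISOR}/api/v1.3/docker/{container_name}"
    data = requests.get(url, timeout=5).json()
    info = next(iter(data.values()))            # single container entry
    sample = info["stats"][-1]                  # newest sample
    return {
        "cpu_total_ns": sample["cpu"]["usage"]["total"],
        "memory_bytes": sample["memory"]["usage"],
        "rx_bytes": sample["network"]["rx_bytes"],
        "tx_bytes": sample["network"]["tx_bytes"],
    }

# print(latest_stats("ue42-service"))
```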
IV. DEMONSTRATION AND INTERACTION
The whole process, as shown in Fig. 1, is presented in this demonstration. The demo is divided into two parts. The first part includes service request handling, service initiation, and service provisioning from the MEC server to end devices. After the service initiation process, the provided service's resource usage and other parameters are monitored.
For the demonstration, we created a service request with the user requirements using a Python script, ran the script on user devices, and the user request followed the path shown in Fig. 1. We showed the live logs of each entity, i.e., the MEC servers, the orchestrator, and the cloud, to illustrate the process. Sample live logs of the orchestrator are shown in Fig. 4. Thus, the viewers will better understand the processes of each of these entities. Finally, once the service has been deployed, we can monitor it using the orchestrator monitoring tool, as shown in Fig. 5.

Figure 3: Network connectivity of MEC demo environment.
Figure 4: Orchestrator live logs.
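For concreteness, a request script of the kind used in the demo could resemble the sketch below. Every field name, value, and the orchestrator address are hypothetical placeholders rather than the prototype's exact message format.

```python
# Illustrative stand-in for the demo's request script; all field names,
# values, and the orchestrator address are hypothetical placeholders.
import json, socket

request = {
    "device_id": "iot-sensor-07",
    "service": "temperature-analytics",
    "priority": 2,
    "cpu": 0.5,                      # vCPUs
    "ram_gb": 1,
    "storage_gb": 2,
    "location": [6.0535, 80.2210],   # UE latitude/longitude
}

with socket.create_connection(("orchestrator.local", 5000)) as sock:
    sock.sendall(json.dumps(request).encode())
    sock.shutdown(socket.SHUT_WR)
    reply = b"".join(iter(lambda: sock.recv(4096), b""))

print("Orchestrator response:", reply.decode())
```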
A. Demonstration Interaction
Attendees at the demonstration will be able to interact with the entire service request handling process that we have proposed. In particular, we show the functionality of an IoT device, where IoT data is sent as a request and processed at a container running on a MEC server. The attendee can observe the process of sending a request from an IoT device, processing the request at the orchestrator, launching a service container, and responding to the IoT device. The entire orchestrator and MEC environment can also be visualized using Graphical User Interfaces (GUIs) to achieve a user-friendly view of the architecture.

Figure 5: Service Monitoring System.
V. TECHNICAL REQUIREMENTS
VMware ESXi version 6.5 is used as the bare-metal hypervisor of the virtualization platform. Compared to other hypervisors, its direct access to the hardware makes it smoother and more scalable. Our prototype implementation comprises three VMs, each configured with Ubuntu 18.04. VMware's web client is the most convenient method to manage our ESXi hosts. We deployed Docker version 19.03.12 on our VMs to implement the edge computing platform. Furthermore, we employed Python version 3 and its libraries to maintain stable connections and data exchange. In a coding context, notable libraries include socket, docker, and haversine.
Table I: System specifications of the demo environment.
VMware ESXi         version 6.5
Virtual machine OS  Ubuntu 18.04
Docker              version 19.03.12
Python              version 3
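As an example of the haversine library mentioned above, the snippet below computes a UE-to-MEC-server distance. The coordinates are arbitrary illustrative values, and using this distance in server selection is our reading of why the library is listed, not a confirmed implementation detail.

```python
# Illustrative use of the haversine library listed above; coordinates are arbitrary.
from haversine import haversine, Unit

ue_location = (6.0535, 80.2210)          # hypothetical UE (lat, lon)
mec_server_location = (6.0461, 80.2103)  # hypothetical MEC server (lat, lon)

distance_km = haversine(ue_location, mec_server_location, unit=Unit.KILOMETERS)
print(f"UE to MEC server distance: {distance_km:.2f} km")
```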
ACKNOWLEDGEMENT
This work is partly supported by the Academy of Finland under the 6Genesis project (grant no. 318927).
REFERENCES
[1] N. Slamnik-Kriještorac, E. Municio, H. C. Resende, S. A. Hadiwardoyo, J. M. Marquez-Barja et al., "Network Service and Resource Orchestration: A Feature and Performance Analysis within the MEC-Enhanced Vehicular Network Context," Sensors, vol. 20, no. 14, p. 3852, 2020.
[2] T. Taleb, K. Samdanis, B. Mada, H. Flinck, S. Dutta, and D. Sabella, “On
Multi-access Edge Computing: A Survey of the Emerging 5G Network
Edge Cloud Architecture and Orchestration,” IEEE Communications
Surveys & Tutorials, vol. 19, no. 3, pp. 1657–1681, 2017.
[3] G. Dilanka, L. Viranga, R. Pamudith, T. D. Gamage, P. Ranaweera, I. A. Balapuwaduge, and M. Liyanage, "A Novel Request Handler Algorithm for Multi-access Edge Computing Platforms in 5G."
[4] G. Dilanka, L. Viranga, R. Pamudith, T. D. Gamage, P. S. Ranaweera, I. A. Balapuwaduge, and M. Liyanage, "A Novel Server Selection Strategy for Multi-access Edge Computing."
[5] B. I. Ismail, E. M. Goortani, M. B. Ab Karim, W. M. Tat, S. Setapa,
J. Y. Luke, and O. H. Hoe, “Evaluation of Docker as Edge Computing
Platform,” in 2015 IEEE Conference on Open Systems (ICOS). IEEE,
2015, pp. 130–135.
Multi-access Edge Computing (MEC) is an emerging ecosystem, which aims at converging telecommunication and IT services, providing a cloud computing platform at the edge of the Radio Access Network (RAN). MEC offers storage and computational resources at the edge, reducing latency for mobile end users and utilizing more efficiently the mobile backhaul and core networks. This paper introduces a survey on MEC and focuses on the fundamental key enabling technologies. It elaborates MEC orchestration considering both individual services and a network of MEC platforms supporting mobility, bringing light into the different orchestration deployment options. In addition, this paper analyzes the MEC reference architecture and main deployment scenarios, which offer multi-tenancy support for application developers, content providers and third parties. Finally, this paper overviews the current standardization activities and elaborates further on open research challenges.