Performance Guaranteed Partial Offloading for
Mobile Edge Computing
Umber Saleem, Yu Liu, Sobia Jangsher, Yong Li
Beijing National Research Center for Information Science and Technology (BNRist),
Department of Electronic Engineering, Tsinghua University, Beijing 100084, China
Department of Electrical Engineering, Institute of Space Technology (IST), Islamabad 44000, Pakistan
Abstract—In this paper, we jointly consider partial offloading and resource allocation to minimize the sum latency with energy efficiency for a multi-user mobile-edge computation offloading (MECO) system based on orthogonal frequency-division multiple access (OFDMA). We formulate a mixed-integer non-linear programming (MINLP) sum-latency minimization problem under constraints on the edge execution delay, the desired energy consumption for local computation, OFDMA subcarrier assignment, QoS, uplink transmission power, and edge computation capacity. We propose that a user can make use of multi-channel transmission to reduce the transmission delay for a task with a large data size. We first derive an expression for the optimal offloading fraction such that the edge computing delay is less than the local execution delay and the energy consumed for local execution does not exceed the desired limit. Then, we transform the original problem into a communication and computation resource allocation problem and propose a suboptimal low-complexity algorithm to find the resource allocation. Simulation results show that the proposed scheme achieves 17% and 25% better performance than random and complete offloading schemes, respectively.
I. INTRODUCTION

With the technological evolution of smartphones and the Internet of Things (IoT), new applications such as online gaming, image/video editing, face/speech recognition, and augmented reality are emerging rapidly. These consumer-oriented services demand real-time communication and intensive computation. However, the explosive growth of mobile data traffic and the finite computation resources of devices pose significant challenges to realizing the millisecond-scale latency requirement of 5G networks [1].
Mobile-edge computing (MEC) is seen as a promising
paradigm which provides cloud services close to the mobile
edge. It enables the mobile users to offload their computation
intensive tasks to the edge server, referred to as mobile-edge
computation offloading (MECO)[2]. In order to minimize the
energy consumption and the latency for MECO, the communication and computation resources need to be optimally allocated among the users and the edge. Hence, the design of effective computation offloading schemes has attracted considerable attention. Most research works focus on energy-efficient resource allocation for single-user and multi-user MECO, and consider latency as a constraint [3], [4], [5].
There are few works which investigate latency minimization
problem for single user [6], [7] and multi-user [8], [9] MECO
systems. In [6], a power constrained delay minimization
problem was formulated based on average delay of each task
and average power consumption of the mobile device. A one
dimensional search algorithm was proposed to find the optimal
stochastic computation offloading policy. On the other hand,
execution cost as a function of execution latency and task
failure was the performance metric in [7] for green MEC.
A dynamic computation offloading policy based on Lyapunov optimization was proposed, which reduced the execution cost and task failures at the cost of some execution delay performance degradation.
degradation. In [8], the allocation of the communication and
remote computational resources in uplink and downlink was
investigated to minimize the average latency of the worst case
user, while saving energy as compared to the local execution.
Power consumption minimization problem was formulated
to investigate the tradeoff between power consumption and
task execution delay in [9]. An algorithm based on Lya-
punov optimization was devised to achieve the objective by
effectively allocating transmit power, bandwidth and local
execution frequency.
It is important to note that the aforementioned works
considered complete offloading, while partial offloading can
significantly improve the latency as the network becomes
dense and the edge resources are limited. In a recent work
[10], partial offloading for weighted sum latency minimization
was investigated by optimally allocating the communication
and computation resources. However, the fundamental energy constraint of the devices is ignored and the data segmentation strategy is derived independently of the resulting local execution cost. In order to improve the performance of MECO in practical scenarios, there is a need to jointly address partial offloading, latency minimization, energy efficiency and resource allocation.
In this paper, we address the sum latency minimization
problem with partial offloading for multi-user orthogonal
frequency-division multiple access (OFDMA) MECO system.
We assume a client server model, where the base station is
the resourceful MEC server with finite computation capacity
and users have limited computation resources. Each user has
a computation intensive task to perform where the data size of
each task is assumed to be large [11]. We formulate a mixed-integer non-linear programming (MINLP) optimization problem with the objective of minimizing the sum latency of all users under expected energy consumption, edge computation latency, and communication and computation resource constraints. First, an optimal offloading fraction based on the local energy consumption and the edge computation latency is derived for each user. The original problem is then decomposed, and a centralized low-complexity suboptimal communication and computation resource allocation algorithm is proposed to decide the partial offloading policy. Performance analysis shows that the proposed solution has promising performance. Moreover, the comparison shows that the proposed scheme outperforms baseline schemes including random offloading and complete offloading.

Fig. 1. Partial offloading scenario.
The rest of the paper is organized as follows. Section II presents the system model and discusses the communication and partial offloading models in detail. In Section III, we formulate the MINLP sum latency minimization problem for multiple users. Section IV discusses the proposed solution and a suboptimal communication and computation resource allocation algorithm. Section V presents simulation results, and the conclusion is provided in Section VI.
II. SYSTEM MODEL

We consider a multi-user MECO system with the BS as a finite-capacity edge server, and denote $\mathcal{M} = \{1, 2, \ldots, M\}$ as the set of mobile users. Each user has a delay-sensitive, computationally intensive task to be executed, while the user's computation resource is limited. A user partially offloads its task to the BS for remote execution through a wireless channel and executes the rest of the task locally. Thus, the total task computation latency for a user is the sum of the offloading, edge computation and local computation delays, as shown in Fig. 1. The BS is assumed to have perfect knowledge of the multi-user channel gains, the size of the input computation data, the local computing energy per bit and the expected energy cost of local computation. Based on this information, the BS determines the amount of data to be offloaded at each user, assigns subcarriers and allocates power to all the users with the aim of minimizing the offloading and edge computation latency. We ignore the downloading latency in our problem, keeping in view that computation results have relatively smaller sizes [12].
A. Partial Offloading Model
Each user $m \in \mathcal{M}$ has a computation task denoted as $T_m = (D_m, c_m)$, where $D_m$ denotes the data size of the task in bits and $c_m$ denotes the CPU cycles required for computing one bit at user $m$. For an optimal offloading decision, we assume that a user $m$ can offload a fraction $\alpha_m \in [0, 1]$ of its computation data; hence the offloaded data is given by $D^{\mathrm{off}}_m = \alpha_m D_m$. In the following discussion, we introduce the local computing model, communication model and edge computing model.
1) Local Execution Model: For each user we define a desired energy consumption value $\epsilon_m$, from which we can determine an energy baseline for offloading to the edge. Therefore, the offloading fraction should be decided according to the expected energy consumption. We assume that each user has a fixed CPU frequency, which may vary over different users. Let $\omega_m$ denote the energy consumption per cycle for local computing at user $m$. Then $\omega_m c_m$ gives the computing energy per bit. After offloading $D^{\mathrm{off}}_m$ bits, user $m$ needs to compute $(1 - \alpha_m) D_m$ bits locally. Then the energy consumption for local computing at user $m$ is given by
$$E^{l}_m = \omega_m c_m (1 - \alpha_m) D_m. \quad (1)$$
Let $F_m$ denote the computation capacity of user $m$, measured in CPU cycles per second. Then the local execution latency can be obtained as
$$L^{l}_m = \frac{c_m (1 - \alpha_m) D_m}{F_m}. \quad (2)$$
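As a quick sanity check, the local cost model in (1) and (2) can be sketched in a few lines of Python; the numeric values below are illustrative and not taken from the paper:

```python
def local_energy(omega_m, c_m, alpha_m, D_m):
    """Local computing energy, Eq. (1): E_l = omega * c * (1 - alpha) * D."""
    return omega_m * c_m * (1.0 - alpha_m) * D_m

def local_latency(c_m, alpha_m, D_m, F_m):
    """Local execution latency, Eq. (2): L_l = c * (1 - alpha) * D / F."""
    return c_m * (1.0 - alpha_m) * D_m / F_m

# Illustrative user: 100 KB task (8e5 bits), 1000 cycles/bit,
# 1e-10 J/cycle, 1 GHz local CPU, half the task offloaded.
E_l = local_energy(1e-10, 1000, 0.5, 8e5)   # 0.04 J
L_l = local_latency(1000, 0.5, 8e5, 1e9)    # 0.4 s
```

Note how both costs scale linearly in the locally retained fraction $(1 - \alpha_m)$, which is what makes the offloading fraction the natural control variable.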
2) Communication Model: Here, we discuss the communication model and the cost of the computation offloading process. We assume an OFDMA system where the total bandwidth $B$ is divided into $N$ orthogonal subcarriers, whose set is denoted as $\mathcal{N} = \{1, 2, \ldots, N\}$. A single subcarrier can be assigned to only one user at a particular instant; hence there is no interference. Moreover, a user can transmit on more than one subcarrier. We define $\rho^n_m \in \{0, 1\}$ and $\boldsymbol{\rho}$ as the subcarrier assignment parameter and the subcarrier assignment matrix, respectively, where $\rho^n_m = 1$ indicates that user $m \in \mathcal{M}$ is assigned subcarrier $n \in \mathcal{N}$, and vice versa. We assume a Rayleigh fading channel, and the channel gain of user $m$ on subcarrier $n$ is denoted as $h^n_m$, corresponding to a white Gaussian noise channel which incorporates a distance-based path loss model.

The transmission power of user $m$ on subcarrier $n$ is denoted as $p^n_m$, and the total transmission power of a user is bounded by $P^{\max}_m$. The power allocation matrix is denoted as $\boldsymbol{p}$. The maximum achievable data rate $r^n_m$ of user $m$ on subcarrier $n$ is given as
$$r^n_m = W \log_2\left(1 + \frac{p^n_m h^n_m}{N_0 W}\right), \quad (3)$$
where $N_0$ denotes the power spectral density of the white Gaussian channel noise and $W$ is the bandwidth of each subcarrier. Accordingly, the data rate of user $m$ is
$$R_m = \sum_{n \in \mathcal{N}} \rho^n_m W \log_2\left(1 + \frac{p^n_m h^n_m}{N_0 W}\right). \quad (4)$$
In order to guarantee the reduction in communication cost, we consider the QoS constraint of each user corresponding to its computation data size. Hence, we assume that the data rate of a user should be greater than a minimum threshold $R^{\min}_m$. Consequently, the number of subcarriers assigned to each user is bounded such that the QoS of all the users is met at least with equality.

Let $N_m$ denote the total number of subcarriers assigned to user $m$. For simplicity, we assume that the offloaded data $D^{\mathrm{off}}_m$ of user $m$ is uniformly distributed over its assigned subcarriers. Thus the data offloaded by user $m$ on its subcarrier $n$ is given by $d^n_m = \alpha_m D_m / N_m$. Due to multi-channel transmission, the offloading latency $L^{\mathrm{off}}_m$ is determined by the transmission delay of the worst channel and is expressed as
$$L^{\mathrm{off}}_m = \max_{n \in \mathcal{N}}\left(\frac{\rho^n_m d^n_m}{r^n_m}\right). \quad (5)$$
The energy consumed while offloading a task can be expressed in terms of the task size, transmission power and transmission rate as
$$E^{\mathrm{off}}_m = \sum_{n \in \mathcal{N}} \frac{\rho^n_m p^n_m d^n_m}{r^n_m}. \quad (6)$$
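The per-subcarrier rate in (3) and the worst-channel offloading latency in (5) can be sketched as follows; the channel values are made up for illustration, and `N0` is a linear power spectral density rather than a dBm figure:

```python
import math

def subcarrier_rate(p, h, W, N0):
    """Achievable rate on one subcarrier, Eq. (3): r = W log2(1 + p h / (N0 W))."""
    return W * math.log2(1.0 + p * h / (N0 * W))

def offloading_latency(alpha_m, D_m, rates):
    """Offloading latency, Eq. (5): offloaded bits are split uniformly over
    the assigned subcarriers, so the delay is set by the slowest one."""
    N_m = len(rates)
    d = alpha_m * D_m / N_m           # bits per subcarrier, d_m^n
    return max(d / r for r in rates)

# Two assigned subcarriers with different gains (illustrative numbers).
W, N0 = 312_500.0, 1e-17              # 20 MHz split into 64 subcarriers
rates = [subcarrier_rate(0.1, 1e-10, W, N0),
         subcarrier_rate(0.1, 5e-11, W, N0)]
L_off = offloading_latency(0.5, 8e5, rates)  # dominated by the weaker channel
```

The `max` in `offloading_latency` is the reason multi-channel transmission helps: adding a subcarrier shrinks the per-subcarrier payload $d^n_m$ even when the extra channel is weaker.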
3) MEC Server Execution Model: We assume that the BS has a finite computation capacity $F$, expressed in CPU cycles per second. Let $F^{e}_m$ denote the computation resource assigned to user $m$. Then the edge execution latency is given as
$$L^{e}_m = \frac{c_m \alpha_m D_m}{F^{e}_m}. \quad (7)$$
Due to the finite computation capacity of the edge server, a feasible computation resource allocation must satisfy $\sum_{m \in \mathcal{M}} F^{e}_m \leq F$. Moreover, we assume that the total time consumption in case of offloading must be less than the time when the computation task is executed locally [13].
III. PROBLEM FORMULATION

In this section, we formulate the resource allocation for partial-offloading multi-user MECO as an optimization problem. A user offloads a fraction of its task to the edge server and computes the remaining task locally after downloading the results from the edge. The execution delay includes the transmission time over the channel, the remote execution time and the local execution time. Thus, our objective is to minimize the sum latency $\sum_{m \in \mathcal{M}} (L^{\mathrm{off}}_m + L^{e}_m + L^{l}_m)$. The joint latency minimization and energy-efficient partial offloading problem can be formulated as
$$\min_{\boldsymbol{\alpha}, \boldsymbol{\rho}, \boldsymbol{p}, \boldsymbol{F}^{e}} \; \sum_{m=1}^{M} \left[ \max_{n \in \mathcal{N}}\left(\frac{\rho^n_m d^n_m}{r^n_m}\right) + \frac{c_m \alpha_m D_m}{F^{e}_m} + \frac{c_m (1 - \alpha_m) D_m}{F_m} \right] \quad (8a)$$
$$\text{s.t.} \quad 0 \leq \alpha_m \leq 1, \; \forall m \in \mathcal{M}, \quad (8b)$$
$$L^{\mathrm{off}}_m + L^{e}_m \leq L^{l}_m, \; \forall m \in \mathcal{M}, \quad (8c)$$
$$E^{l}_m \leq \epsilon_m, \; \forall m \in \mathcal{M}, \quad (8d)$$
$$\rho^n_m \in \{0, 1\}, \; \sum_{m \in \mathcal{M}} \rho^n_m = 1, \; \forall n \in \mathcal{N}, \quad (8e)$$
$$\sum_{n \in \mathcal{N}} \rho^n_m \geq 1, \; \forall m \in \mathcal{M}, \quad (8f)$$
$$R_m \geq R^{\min}_m, \; \forall m \in \mathcal{M}, \quad (8g)$$
$$\sum_{n \in \mathcal{N}} \rho^n_m p^n_m \leq P^{\max}_m, \; \forall m \in \mathcal{M}, \quad (8h)$$
$$\sum_{m \in \mathcal{M}} F^{e}_m \leq F. \quad (8i)$$
Here, (8a) is our objective function, which is the sum of the offloading, edge computation and local computation latencies of all users. Constraint (8b) gives the limits on the fraction of data to be offloaded by every user. Constraint (8c) ensures that offloading and edge execution together require less time than local execution. Constraint (8d) implies that the energy cost must not exceed the expected energy consumption of a user, to ensure that offloading is energy efficient. Constraints (8e) and (8f) bound the communication resource allocation, where (8e) enforces exclusive channel allocation due to OFDMA and (8f) requires that a user be allocated at least one subcarrier. Constraint (8g) ensures that the sum data rate of a user is greater than a minimum threshold to guarantee QoS. Constraint (8h) bounds the total uplink transmission power of a user. Constraint (8i) describes a feasible computation resource allocation at the edge server, meaning that computation resources are allocated to the offloading users within the computation capacity of the edge server.

It can be observed that problem (8) is an MINLP problem. The binary assignment variable $\rho^n_m$ results in a non-convex feasible set, and the non-linear constraints (8c), (8d), (8g) and (8h) make the objective in (8a) non-convex due to the product of binary and continuous terms. Hence, our problem is a mixed discrete and non-convex optimization problem, which renders it NP-hard.
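To make the objective concrete, the sketch below evaluates the sum latency in (8a) for a given allocation; the per-user parameters are hypothetical and only serve to exercise the three latency terms:

```python
def user_latency(alpha, D, c, F_local, F_edge, rates):
    """One summand of (8a): worst-subcarrier transmission delay
    plus edge execution delay plus local execution delay."""
    d = alpha * D / len(rates)                  # bits per assigned subcarrier
    t_off = max(d / r for r in rates)           # offloading latency, Eq. (5)
    t_edge = c * alpha * D / F_edge             # edge latency, Eq. (7)
    t_local = c * (1.0 - alpha) * D / F_local   # local latency, Eq. (2)
    return t_off + t_edge + t_local

def sum_latency(users):
    """Objective (8a): sum of per-user total latencies."""
    return sum(user_latency(**u) for u in users)

# Two hypothetical users with given fractions, rates and CPU shares.
users = [
    dict(alpha=0.4, D=8e5, c=1000, F_local=1e9, F_edge=2e9, rates=[1e6]),
    dict(alpha=0.5, D=4e5, c=500,  F_local=5e8, F_edge=2e9, rates=[5e5, 5e5]),
]
total = sum_latency(users)
```

The optimization in (8) searches over the fractions, the subcarrier assignment behind `rates`, the powers, and the edge shares `F_edge`, subject to (8b)-(8i); the sketch only evaluates one candidate point.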
IV. PROPOSED SOLUTION

In this section, we derive an expression for the offloading fraction and then transform the original problem into a resource allocation problem. We then propose a centralized low-complexity algorithm that allocates the communication resources to reduce the offloading latency and the computation resources at the edge to reduce the edge computation latency.

A. Optimal Offloading Fraction

The optimal data segmentation strategy in the proposed problem is influenced by two assumptions. First, the data offloaded should not require more energy consumption than the desired value at each user. Second, the offloaded fraction should improve the offloading performance as compared to local execution. Therefore, we derive an expression for $\alpha_m$ based on constraints (8b), (8c) and (8d) as
$$\alpha^{*}_m = \frac{c_m F^{e}_m r^{\min}_m}{F_m F^{e}_m + (F_m + F^{e}_m)\, c_m r^{\min}_m}. \quad (9)$$
Here, $r^{\min}_m$ denotes the minimum achievable data rate among all the subcarriers of a user, which corresponds to the maximum value of the offloading latency.
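At the fraction given by (9), the offload-plus-edge latency exactly matches the local latency of the remaining fraction, i.e., constraint (8c) holds with equality; a small numerical check with made-up parameters confirms this (the separate energy bound (8d) is omitted here):

```python
def optimal_fraction(c, F_local, F_edge, r_min):
    """Offloading fraction alpha* from Eq. (9)."""
    return c * F_edge * r_min / (F_local * F_edge + (F_local + F_edge) * c * r_min)

# Hypothetical user: 1000 cycles/bit, 1 GHz local CPU, 2 GHz edge share,
# 1 Mb/s on its worst subcarrier, 1 Mb task.
c, F_local, F_edge, r_min, D = 1000, 1e9, 2e9, 1e6, 1e6
a = optimal_fraction(c, F_local, F_edge, r_min)   # alpha* = 0.4 here

remote = a * D / r_min + c * a * D / F_edge        # offload + edge time
local  = c * (1.0 - a) * D / F_local               # local time for the rest
# remote == local: (8c) is tight at alpha*, so no latency is left on the table.
```

Intuitively, any smaller fraction would leave the edge idle while the device computes, and any larger fraction would make the offloading-plus-edge path the bottleneck.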
B. Optimal Resource Allocation

After obtaining the expression for $\alpha^{*}_m$, the original problem is transformed into a latency minimization problem through optimal communication and computation resource allocation, presented as
$$\min_{\boldsymbol{\rho}, \boldsymbol{p}, \boldsymbol{F}^{e}} \; \sum_{m=1}^{M} \left[ \max_{n \in \mathcal{N}}\left(\frac{\rho^n_m d^{*n}_m}{r^n_m}\right) + \frac{c_m \alpha^{*}_m D_m}{F^{e}_m} + \frac{c_m (1 - \alpha^{*}_m) D_m}{F_m} \right] \quad (10a)$$
$$\text{s.t.} \; (8e), (8f), (8g), (8h), \text{ and } (8i). \quad (10b)$$
The problem in (10a) still has a non-convex objective function and non-linear constraints due to the binary variable $\rho^n_m$, which makes the problem intractable, and a global optimum solution is difficult to obtain. Therefore, we propose a centralized low-complexity algorithm for communication and computation resource allocation with the aim of minimizing the sum latency.
C. Communication and Computation Resource Allocation

Here, we propose Algorithm 1 by decomposing our problem into two parts. First, we assign subcarriers and allocate power to all the users, keeping in view that maximizing the data rate per subcarrier results in minimum offloading latency. After allocating the communication resources, we allocate computation resources according to the computation capacity of the edge server.

In the first iteration, a single subcarrier is assigned to each user. Following the marginal rate function with respect to a subcarrier, to maximize the data rate a subcarrier $n$ should be assigned to a user $m$ such that the ratio between the data rate of the user and the average data rate on that subcarrier is maximized. Therefore, we select the subcarrier and user pair maximizing this ratio. As there is only one subcarrier per user at this stage, we allocate maximum power to all the subcarriers. Next, we assign the remaining subcarriers such that the user with the smallest $R_m / R^{\min}_m$ value is assigned its best available subcarrier. Each time a subcarrier assignment is performed, the power allocation is updated and the sum rate is calculated for all the users. We perform uniform power allocation for a user, motivated by the fact that each subcarrier carries the same amount of data. The subsequent iterations aim to improve the sum rate of all the users, which leads to a reduced offloading delay. Finally, the edge computation resources are equally distributed among all the users, as the edge execution latency is already taken care of while determining the offloading fraction.

The proposed algorithm has $M$ iterations in the first step of initial subcarrier assignment and $N$ iterations for assigning the remaining subcarriers. Therefore, assuming $N \gg M$, the complexity of Algorithm 1 can be expressed as $O(|\mathcal{N}|)$, which shows that it achieves low computational complexity.
V. SIMULATION RESULTS

In this section, we evaluate the performance of the proposed scheme by analyzing numerical results and comparing with baseline schemes, namely complete offloading and random offloading, as there is no other existing scheme that considers latency minimization and guarantees energy efficiency at the same time for partial offloading. The simulation parameters are as follows unless stated otherwise. The cell has a radius of 500 m and there are 35 users randomly located
Algorithm 1 Joint resource allocation scheme
1: Input: $\mathcal{N}$, $\mathcal{M}$, channel gain matrix $H$, $F$
2: Output: subcarrier assignment matrix $\boldsymbol{\rho}$, power allocation matrix $\boldsymbol{p}$
3: Initialize: $U = \mathcal{M}$, $S = \mathcal{N}$, $R_m = 0 \; \forall m \in \mathcal{M}$
4: while $U \neq \emptyset$ do
5:   Find the user-subcarrier pair $(m^*, n^*)$ maximizing the rate ratio
6:   $\rho(m^*, n^*) = 1$, $U = U \setminus \{m^*\}$, $S = S \setminus \{n^*\}$
7:   $p(m^*, n^*) = P^{\max}_{m^*}$ and $R_m(m^*) = r^{n^*}_{m^*}$
8: end while
9: while $S \neq \emptyset$ do
10:  Find $m^* = \arg\min_m R_m / R^{\min}_m$
11:  Find the best available subcarrier $n^*$ for user $m^*$
12:  $\rho(m^*, n^*) = 1$ and $S = S \setminus \{n^*\}$
13:  For $(m^*, n)$ calculate $p^n_{m^*}$ by uniformly dividing $P^{\max}_{m^*}$
14:  Update $R_m(m^*) = R_m(m^*) + r^{n^*}_{m^*}$ from the updated $\boldsymbol{p}$
15: end while
16: for each user $m$ in $\mathcal{M}$ do
17:  $F^{e}_m = F / M$
18: end for
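An executable sketch of Algorithm 1's two-phase greedy allocation is given below. It is a simplification under stated assumptions: the rate-ratio selection of line 5 is approximated by picking each user's strongest free subcarrier, and all channel gains and budgets are made up:

```python
import math

def joint_allocation(H, P_max, R_min, W, N0):
    """Greedy two-phase subcarrier assignment in the spirit of Algorithm 1.

    H[m][n] is the channel gain of user m on subcarrier n. Returns the
    assignment {m: [subcarriers]} and the per-user rates. A user's power
    budget is split uniformly over its assigned subcarriers.
    """
    M, N = len(H), len(H[0])
    assign = {m: [] for m in range(M)}

    def rate(m, n, p):
        return W * math.log2(1.0 + p * H[m][n] / (N0 * W))  # Eq. (3)

    def user_rate(m):
        if not assign[m]:
            return 0.0
        p = P_max[m] / len(assign[m])          # uniform power split
        return sum(rate(m, n, p) for n in assign[m])

    free = set(range(N))
    # Phase 1: give every user its single best free subcarrier at full power.
    for m in sorted(range(M), key=lambda m: -max(H[m])):
        n = max(free, key=lambda n: H[m][n])
        assign[m].append(n)
        free.remove(n)
    # Phase 2: hand each remaining subcarrier to the user furthest from QoS.
    while free:
        m = min(range(M), key=lambda m: user_rate(m) / R_min[m])
        n = max(free, key=lambda n: H[m][n])
        assign[m].append(n)
        free.remove(n)
    return assign, [user_rate(m) for m in range(M)]
```

After the loop, every subcarrier is assigned to exactly one user and every user holds at least one, matching constraints (8e) and (8f); the edge shares would then be set to $F/M$ as in lines 16-18.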
Fig. 2. Effect of number of users on sum latency of the system for a fixed number of subcarriers.
in the network. The total bandwidth $B$ is 20 MHz, which is divided into $N = 64$ orthogonal subcarriers. The channel gain $h^n_m$ is modeled as an independent Rayleigh fading channel which incorporates path loss and shadowing effects [15]. The noise power spectral density is set as $N_0 = -100$ dBm. For each task, the data size and required CPU cycles per bit are uniformly distributed as $D_m \in [100, 500]$ KB and $c_m \in [500, 1500]$ cycles/bit, while $R^{\min}_m$ is chosen randomly from the range $\{100, 200\}$ KB/s based on the data size of each user's task. The expected energy consumption for each user is randomly chosen from $\{1, 1.5, 2\}$ J. The local computation capacity $F_m$ and the local computation energy per cycle $\omega_m$ follow uniform distributions over $[0.1, 1.0]$ GHz and $[0.5, 2] \times 10^{-10}$ J/cycle, respectively. We model the computing capabilities of the users as independent variables. Last, the finite edge capacity is set as $F = 10 \times 10^{9}$ cycles/s (10 GHz).
We first compare the performance of the proposed scheme with random offloading, which randomly segments the computation data for offloading, and complete offloading, which offloads the complete data for remote execution. Fig. 2 shows the
average latency versus number of users in the three different
cases. The average latency increases with the number of users
for all the three schemes as the communication and edge
computation resources become scarce. For small number of
users, partial offloading scheme performs better than random
offloading, while complete offloading achieves the minimum
latency due to sufficient computation resources at the edge.
However, partial offloading outperforms the other schemes
as the number of users increases in the network. This can
be explained by the fact that our proposed scheme allows
multi-channel transmission and jointly considers the com-
munication and computation resources allocation resulting in
reduced communication and edge execution latency. Hence,
it is evident that the proposed scheme is effective for delay
sensitive tasks especially when the network becomes dense.
In Fig. 3, we compare the performance of the three schemes
from energy consumption perspective. It can be seen that as
the task size increases, the energy consumption also increases
for all the three cases due to limited communication and
computation resources, while the proposed scheme consumes
less energy as compared to random and complete offloading.

Fig. 3. Effect of task size on local energy consumption.
The total energy consumed is due to the local execution and
then data transmission over the channel. In our scheme, we
decide the optimal offloading fraction based on the expected
energy cost and edge execution latency due to which less
energy is consumed locally. Moreover, the optimized sub-
carrier and power allocation reduces the energy consumption
for communication. Complete offloading consumes energy for
communication only, which is still larger as the data size of
tasks is assumed to be large and communication resources are
not allocated optimally. Random offloading performs worst, as neither the data segmentation nor the communication is optimal, which leads to higher energy consumption for both local execution and offloading.
Next, we analyse the effect of increase in task size on
communication cost in order to provide more insight into the
optimal communication resource allocation policy. We assume
that each user has different task size and QoS requirement, and
plot the average communication cost in Fig. 4 by increasing
the task size for each user. It can be observed that the time
consumed for offloading in all the three compared cases
increases with the increase in task size. This can be explained
as each user has a limited computation capacity due to which
it tends to offload more computation on the edge server. On
the other hand, with increase in offloading data the communi-
cation resources also become scarce, which leads to increased
transmission delay. The comparison of the communication
cost for the three different cases shows that the proposed
algorithm achieves minimum communication delay due to
optimal subcarrier assignment and power allocation. In contrast, the communication cost in the case of complete offloading is the highest due to the data rate bottleneck. Although random offloading is better than complete offloading, its communication resource allocation is not optimal, due to which the partial offloading does not pay off.
Finally, we analyse the effect of edge computation capacity
on sum latency of the system in Fig. 5. The results show that
by increasing the edge capacity the performance is improved
for all the three schemes. This trend is obvious as the users
will tend to offload and perform most of the computation
remotely on the resourceful edge.

Fig. 4. Effect of task size on the time cost of local computing, offloading and edge computing.

However, exceeding the edge capacity beyond a certain limit does not improve the
performance any further, which shows that there exists a
critical value of edge computation capacity beyond which the
latency cannot be reduced any further. Moreover, the comparison shows that the proposed scheme achieves the minimum sum latency compared to the other two schemes, due to optimal offloading and resource allocation, and that beyond this critical value the sum latency becomes invariant with any further increase in computation capacity. It is important to note that an edge capacity beyond 10 GHz is hard to realise; thus the results for edge capacities above 10 GHz are insignificant.
VI. CONCLUSION

In this paper, we jointly investigated partial offloading and resource allocation for an OFDMA-based multi-user MECO system. To enhance the system performance and ensure energy efficiency, we formulated a sum latency minimization problem considering constraints on the edge computing latency, the expected energy cost for local computing, and the communication and edge computation resources. We determined the optimal offloading fraction at every user to improve edge performance and save local energy, and then proposed a low-complexity algorithm for optimal communication and computation resource allocation. Numerical results showed that our proposed scheme achieves better performance, in terms of both sum latency and energy consumption, than the random and complete offloading schemes.
ACKNOWLEDGMENT

This work was supported in part by the National Key Research and Development Program of China under grant 2017YFE0112300, the National Natural Science Foundation of China under grants 61861136003, 61621091 and 61673237, the Beijing National Research Center for Information Science and Technology under 20031887521, and the research fund of the Tsinghua University-Tencent Joint Laboratory for Internet Innovation.
Fig. 5. Effect of edge computation capacity on sum latency of the system.
REFERENCES

[1] J. Gubbi, R. Buyya, S. Marusic, and M. Palaniswami, "Internet of Things (IoT): A vision, architectural elements, and future directions," Future Generation Computer Systems, vol. 29, no. 7, pp. 1645–1660, 2013.
[2] P. Mach and Z. Becvar, “Mobile edge computing: A survey on archi-
tecture and computation offloading,” IEEE Communications Surveys &
Tutorials, vol. 19, no. 3, pp. 1628–1656, 2017.
[3] S. Sardellitti, G. Scutari, and S. Barbarossa, “Joint optimization of
radio and computational resources for multicell mobile-edge comput-
ing,” IEEE Transactions on Signal and Information Processing over
Networks, vol. 1, no. 2, pp. 89–103, 2015.
[4] C. You, K. Huang, H. Chae, and B.-H. Kim, “Energy-efficient resource
allocation for mobile-edge computation offloading,” IEEE Transactions
on Wireless Communications, vol. 16, no. 3, pp. 1397–1411, 2017.
[5] X. Chen, L. Jiao, W. Li, and X. Fu, “Efficient multi-user computation
offloading for mobile-edge cloud computing,” IEEE/ACM Transactions
on Networking, vol. 24, no. 5, pp. 2795–2808, 2016.
[6] J. Liu, Y. Mao, J. Zhang, and K. B. Letaief, “Delay-optimal computation
task scheduling for mobile-edge computing systems,” in Information
Theory (ISIT), 2016 IEEE International Symposium on. IEEE, 2016,
pp. 1451–1455.
[7] Y. Mao, J. Zhang, and K. B. Letaief, "Dynamic computation offloading for mobile-edge computing with energy harvesting devices," IEEE Journal on Selected Areas in Communications, vol. 34, no. 12, pp. 3590–3605, 2016.
[8] M. Molina, O. Muñoz, A. Pascual-Iserte, and J. Vidal, "Joint scheduling of communication and computation resources in multiuser wireless application offloading," in Personal, Indoor, and Mobile Radio Communication (PIMRC), 2014 IEEE 25th Annual International Symposium on. IEEE, 2014, pp. 1093–1098.
[9] Y. Mao, J. Zhang, S. Song, and K. B. Letaief, “Power-delay tradeoff in
multi-user mobile-edge computing systems,” in Global Communications
Conference (GLOBECOM), 2016 IEEE. IEEE, 2016, pp. 1–6.
[10] J. Ren, G. Yu, Y. Cai, Y. He, and F. Qu, “Partial offloading for latency
minimization in mobile-edge computing,” in GLOBECOM 2017-2017
IEEE Global Communications Conference. IEEE, 2017, pp. 1–6.
[11] K. Kumar and Y.-H. Lu, “Cloud computing for mobile users: Can
offloading computation save energy?” Computer, vol. 43, no. 4, pp.
51–56, 2010.
[12] F. Wang, J. Xu, X. Wang, and S. Cui, "Joint offloading and computing optimization in wireless powered mobile-edge computing systems," IEEE Transactions on Wireless Communications, vol. 17, no. 3, pp. 1784–1797, 2018.
[13] C. Wang, F. R. Yu, C. Liang, Q. Chen, and L. Tang, “Joint computation
offloading and interference management in wireless cellular networks
with mobile edge computing,” IEEE Transactions on Vehicular Tech-
nology, vol. 66, no. 8, pp. 7432–7445, 2017.
[14] C. Wang, C. Liang, F. R. Yu, Q. Chen, and L. Tang, “Computation
offloading and resource allocation in wireless cellular networks with
mobile edge computing,” IEEE Transactions on Wireless Communica-
tions, vol. 16, no. 8, pp. 4924–4938, 2017.
[15] K. Kim, Y. Han, and S.-L. Kim, “Joint subcarrier and power allocation
in uplink ofdma systems,” IEEE Communications Letters, vol. 9, no. 6,
pp. 526–528, 2005.
... We consider that each EuD has a computational task R to be completed in time period T . In our scenario, the partial offloading technique is adopted by EuDs [51], and R = [0, 1] is the offloading ratio range between 0 and 1. Resultant, 1−R is the remaining task to be executed locally by EuD i in time period T . ...
Full-text available
Advances in Unmanned Air Vehicle (UAV) technology have paved a way for numerous configurations and applications in communication systems. However, UAV dynamics play an important role in determining its effective use. In this article, while considering UAV dynamics, we evaluate the performance of a UAV equipped with a Mobile-Edge Computing (MEC) server that provides services to End-user Devices (EuDs). The EuDs due to their limited energy resources offload a portion of their computational task to nearby MEC-based UAV. To this end, we jointly optimize the computational cost and 3D UAV placement along with resource allocation subject to the network, communication, and environment constraints. A Deep Reinforcement Learning (DRL) technique based on a continuous action space approach, namely Deep Deterministic Policy Gradient (DDPG) is utilized. By exploiting DDPG, we propose an optimization strategy to obtain an optimal offloading policy in the presence of UAV dynamics, which is not considered in earlier studies. The proposed strategy can be classified into three cases namely; training through an ideal scenario, training through error dynamics, and training through extreme values. We compared the performance of these individual cases based on cost percentage and concluded that case II (training through error dynamics) achieves minimum cost i.e., 37.75 %, whereas case I and case III settles at 67.25% and 67.50% respectively. Numerical simulations are performed, and extensive results are obtained which shows that the advanced DDPG based algorithm along with error dynamic protocol is able to converge to near optimum. To validate the efficacy of the proposed algorithm, a comparison with state-of-the-art Deep Q-Network (DQN) is carried out, which shows that our algorithm has significant improvements.
... In the MEC design, the primary prerequisite is only partly offloading the UE's task for each time slot. [15]. The ratio of task that is offloaded to the edge server by the UE is denoted by M k (i) ∈ [0, 1] and the remaining task that is executed locally by the UE is denoted by (1 − M k (i)). ...
Conference Paper
The next generation network 5G and beyond will provide higher speed, greater capability and lower latency for high-end technologies like augmented reality, online gaming, robotic arm surgery, high-quality video streaming, etc. Mobile Edge Computing (MEC) brings computing, storage and networking resources closer to the end user to host the compute-intensive and latency-sensitive applications at the edge of the network. Presently, the UAV-equipped Mobile Edge Computing (MEC) system provides computation services to mobile devices on the ground. However, the issue of processing delay and energy consumption in the task offloading process needs to be addressed. In the present paper, a novel Deep Deterministic Policy Gradient (DDPG) based approach is proposed that shall reduce the processing time by simultaneously improving user scheduling, resource allotment and UAV maneuverability, and formulating computation offloading problem as a high non-convex objective function. The performance of the suggested approach is demonstrated by simulation results using real-world parameters and the obtained results are compared to state-of-the-art algorithms.
... Saleem et al. [21] studied the problem of minimizing latency under a local energy constraint, taking into account the limited energy availability at the user. This has a high impact on the data segmentation decision. ...
In recent years, eXtended Reality (XR) applications have been employed increasingly in various scenarios in tourism, health care, education, manufacturing, etc. Such applications are now accessible via mobile devices, wearable devices, tablets, etc. However, mobile devices normally suffer from constraints in battery capacity and processing power, limiting the range of applications supported or lowering the quality of experience when using them. One effective way to address these issues is to offload the computation to cloud servers. The inherent limitation of the cloud computing approach is the long propagation distance from the end user to the processing server, which may result in latency intolerable for many mobile XR applications. To overcome these limitations, Multi-access Edge Computing (MEC) brings mobile computing, network control, and storage services to the network edge (for example, at base stations and access points) so that computation-intensive and latency-sensitive applications can be supported on resource-limited mobile devices. This paper proposes a Deep Reinforcement Learning-based offloading scheme for XR applications (DRLXR). The problem is formulated as a utility-function optimization that accounts for both energy consumption and execution delay at the devices, and the Markov Decision Process (MDP) framework is employed as the decision maker. The Deep Reinforcement Learning (DRL) technique is then used to train and derive a close-to-optimal offloading decision for mobile XR devices. The proposed DRLXR scheme is validated in a simulation environment and compared against other recent offloading schemes. The simulation results show that the proposed scheme outperforms its counterparts in terms of total execution latency and energy consumption.
... In view of the OFDMA mechanism, interference is ignored due to the exclusive subcarrier allocation [25, 32-34]. Therefore, we do not consider interference from other IoT devices in this article. ...
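The exclusive-subcarrier property the excerpt relies on can be sketched as a simple greedy assignment. The devices, subcarriers, and channel gains below are made up for illustration; the point is only that each subcarrier is held by exactly one device, so intra-cell interference can be ignored.

```python
def assign_subcarriers(gains):
    """Greedy OFDMA assignment: each subcarrier goes to exactly one device.

    gains[d][s] is the channel gain of device d on subcarrier s.
    Exclusive allocation (no two devices share a subcarrier) is what
    lets OFDMA formulations drop intra-cell interference terms.
    """
    n_sub = len(gains[0])
    alloc = {}
    for s in range(n_sub):
        # Give subcarrier s to the device with the best gain on it.
        best = max(range(len(gains)), key=lambda d: gains[d][s])
        alloc[s] = best
    return alloc

# Two devices, three subcarriers (illustrative gains).
gains = [[0.9, 0.2, 0.5],
         [0.4, 0.8, 0.6]]
alloc = assign_subcarriers(gains)
```

Each key of `alloc` appears once, so no subcarrier is reused within the cell; a real scheduler would also weigh fairness and rate requirements.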
Mobile edge computing (MEC) has become an indispensable part of the intelligent manufacturing era of Industry 4.0. In a smart city, computation-intensive tasks can be offloaded to an MEC server or the central cloud server for execution. However, privacy disclosure may arise when raw data is migrated to other MEC servers or the central cloud server. Since federated learning protects privacy while improving training performance, it is introduced to address this issue. In this article, we formulate the joint optimization of task offloading and resource allocation to minimize the energy consumption of all Internet of Things (IoT) devices subject to a delay threshold and limited resources. A two-timescale federated deep reinforcement learning algorithm based on the Deep Deterministic Policy Gradient framework (FL-DDPG) is proposed. Simulation results show that the proposed algorithm greatly reduces the energy consumption of the IoT devices.
... An online algorithm based on Lyapunov optimization was proposed to reduce power consumption. In [21], Saleem et al. jointly considered partial offloading and resource allocation to minimize the sum latency with energy efficiency for multi-user MEC offloading. An expression for the optimal offloading fraction was derived such that the energy consumed for local execution would not exceed the desired limit. ...
In this paper, we consider an mmWave-based train-ground communication system in the high-speed railway (HSR) scenario, where the computation tasks of users can be partially offloaded to the rail-side base station (BS) or the mobile relays (MRs) deployed on the roof of the train. The MRs operate in the full-duplex (FD) mode to achieve high spectrum utilization. We formulate the problem of minimizing the average task execution latency of all users, under local device and MR energy consumption constraints. We propose a joint resource allocation and computation offloading scheme (JRACO) to solve the problem. It consists of a resource allocation and computation offloading (RACO) algorithm and an MR energy-constraint algorithm. RACO utilizes matching game theory to iterate between two subproblems, i.e., data segmentation, and joint user association and sub-channel allocation. With the RACO results, the MR energy-constraint algorithm ensures that the MR energy consumption constraint is satisfied. Extensive simulations validate that JRACO can effectively reduce the average latency and increase the number of served users compared with three baseline schemes.
Edge computing enhances the processing capabilities of edge networks for processing mobile users' jobs. Approaches that dispatch jobs to a single edge cloud are prone to task accumulation and excessive latency due to the uncertain workload and limited resources of edge servers. Offloading tasks to lightly-loaded neighbors, which are multiple hops away, alleviates the dilemma but increases transmission cost and security risks. Hence, how to realize the trade-off between computing latency, offloading cost, and security during job dispatching is a great challenge. In this paper, we propose an online Deep learning-based model for Secure Collaborative Job Dispatching (DeepSCJD) in multiple edge clouds. Specifically, we first utilize bi-directional long short-term memory to predict the workload of edge servers and apply graph neural networks to aggregate the features of directed-acyclic-graph jobs as well as the undirected weighted topology of edge servers. Based on the state composed of these two features, a deep reinforcement learning agent, consisting of a simple deep Q-network and a linear branch, generates the final dispatching decision, aiming to achieve the smallest average weighted cost. Experiments on real-world data sets demonstrate the efficiency of the proposed model and its superiority over traditional and state-of-the-art baselines, reaching a maximum average performance improvement of 54.16% relative to K-Hop. Extensive evaluations demonstrate the generalization of our model under various conditions.
Integrating mobile-edge computing (MEC) and wireless power transfer (WPT) is a promising technique in the Internet of Things (IoT) era. It can provide massive low-power mobile devices with enhanced computation capability and a sustainable energy supply. In this paper, we consider a wireless powered multiuser MEC system, where a multi-antenna access point (AP) (integrated with an MEC server) broadcasts wireless power to charge multiple users, and each user node relies on the harvested energy to execute latency-sensitive computation tasks. With MEC, these users can execute their respective tasks locally or offload all or part of them to the AP based on a time division multiple access (TDMA) protocol. Under this setup, we pursue an energy-efficient wireless powered MEC system design by jointly optimizing the transmit energy beamformer at the AP, the central processing unit (CPU) frequency and the offloaded bits at each user, as well as the time allocation among different users. In particular, we minimize the energy consumption at the AP over a particular time block subject to the computation latency and energy harvesting constraints per user. By formulating this problem in a convex framework and employing the Lagrange duality method, we obtain its optimal solution in a semi-closed form. Numerical results demonstrate the benefit of the proposed joint design over alternative benchmark schemes in terms of the achieved energy efficiency.
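The CPU-frequency optimization in abstracts like the one above typically builds on the standard dynamic-power model for local execution, E = κ·f²·C, where κ is the chip's effective switched capacitance and C the number of CPU cycles. A minimal sketch, with κ and the cycle count chosen as illustrative assumptions:

```python
def local_energy(cycles, f, kappa=1e-27):
    """Energy (J) for executing `cycles` CPU cycles at frequency f (Hz).

    Uses the common dynamic-power model E = kappa * f**2 * cycles, where
    kappa is the effective switched capacitance. Lowering f saves energy
    quadratically, at the cost of a longer run time cycles / f.
    """
    return kappa * f ** 2 * cycles

# Halving the CPU frequency quarters the energy but doubles the run time.
e_fast = local_energy(1e9, 2e9)  # 10^9 cycles at 2 GHz
e_slow = local_energy(1e9, 1e9)  # same task at 1 GHz
```

This quadratic energy/frequency coupling is what makes CPU-frequency scaling a useful degree of freedom alongside the offloading decision itself.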
Mobile-edge computing (MEC) has recently emerged as a promising paradigm to liberate mobile devices from increasingly intensive computation workloads, as well as to improve the quality of the computation experience. In this paper, we investigate the tradeoff between two critical but conflicting objectives in multi-user MEC systems, namely, the power consumption of mobile devices and the execution delay of computation tasks. A power consumption minimization problem with task-buffer stability constraints is formulated to investigate the tradeoff, and an online algorithm that decides the local execution and computation offloading policy is developed based on Lyapunov optimization. Specifically, at each time slot, the optimal frequencies of the local CPUs are obtained in closed form, while the optimal transmit power and bandwidth allocation for computation offloading are determined with the Gauss-Seidel method. Performance analysis is conducted for the proposed algorithm, which indicates that the power consumption and execution delay obey an [O(1/V), O(V)] tradeoff with V as a control parameter. Simulation results are provided to validate the theoretical analysis and demonstrate the impacts of various parameters on the system performance.
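The [O(1/V), O(V)] result above means that, as the Lyapunov control parameter V grows, the power-consumption gap from optimal shrinks like 1/V while the queueing delay grows linearly in V. A stylized numeric illustration (the constants c1 and c2 are arbitrary, not taken from the paper):

```python
def tradeoff(V, c1=2.0, c2=0.5):
    """Stylized Lyapunov tradeoff: power gap O(1/V), delay O(V)."""
    power_gap = c1 / V  # distance from the optimal power consumption
    delay = c2 * V      # average queueing delay bound
    return power_gap, delay

# Sweeping V shows the tension: a smaller power gap costs more delay.
gaps_delays = [tradeoff(V) for V in (1, 10, 100)]
```

Choosing V is therefore a knob the operator turns to trade battery savings against responsiveness.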
Mobile edge computing (MEC) has risen as a promising technology to augment the computational capabilities of mobile devices. Meanwhile, in-network caching has become a natural approach to handling exponentially increasing Internet traffic. The important issues in these two networking paradigms are computation offloading and content caching strategies, respectively. In order to jointly tackle these issues in wireless cellular networks with mobile edge computing, we formulate the computation offloading decision, resource allocation, and content caching strategy as an optimization problem, considering the total revenue of the network. Furthermore, we transform the original problem into a convex problem and then decompose it so as to solve it in a distributed and efficient way. Finally, with recent advances in distributed convex optimization, we develop an alternating direction method of multipliers (ADMM) based algorithm to solve the optimization problem. The effectiveness of the proposed scheme is demonstrated by simulation results with different system parameters.
Mobile edge computing (MEC) has attracted great interest as a promising approach to augment the computational capabilities of mobile devices. An important issue in the MEC paradigm is computation offloading. In this paper, we propose an integrated framework for computation offloading and interference management in wireless cellular networks with mobile edge computing. In this integrated framework, we formulate the computation offloading decision, physical resource block (PRB) allocation, and MEC computation resource allocation as optimization problems. The MEC server makes the offloading decision according to the local computation overhead estimated by all user equipments (UEs) and the offloading overhead estimated by the MEC server itself. Then, the MEC server performs the PRB allocation using a graph coloring method. The outcomes of the offloading decision and PRB allocation are then used to distribute the computation resources of the MEC server to the UEs. Simulation results are presented to show the effectiveness of the proposed scheme with different system parameters.
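The graph-coloring view of PRB allocation mentioned above treats interfering UEs as adjacent vertices that must receive different colors (PRBs). A minimal greedy-coloring sketch; the interference graph below is a made-up example, and the cited paper's exact coloring procedure may differ:

```python
def greedy_coloring(adj):
    """Greedy graph coloring: adjacent vertices get different colors.

    adj[v] is the set of neighbors of vertex v (UEs that interfere with v).
    Returns a dict mapping each vertex to a color (a PRB index); vertices
    are visited in sorted order and take the smallest color not used by
    an already-colored neighbor.
    """
    color = {}
    for v in sorted(adj):
        used = {color[u] for u in adj[v] if u in color}
        c = 0
        while c in used:
            c += 1
        color[v] = c
    return color

# A triangle of mutually interfering UEs, plus one isolated UE.
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}, 3: set()}
colors = greedy_coloring(adj)
```

The triangle forces three distinct PRBs, while the isolated UE can reuse the first one, which is exactly how coloring lets non-interfering UEs share spectrum.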
Technological evolution of mobile user equipments (UEs), such as smartphones or laptops, goes hand in hand with the evolution of new mobile applications. However, running computationally demanding applications at the UEs is constrained by their limited battery capacity and energy consumption. A suitable solution for extending the battery lifetime of the UEs is to offload applications demanding heavy processing to a conventional centralized cloud (CC). Nevertheless, this option introduces significant execution delay, consisting of the time to deliver the offloaded applications to the cloud and back plus the computation time at the cloud. Such delay is inconvenient and makes offloading unsuitable for real-time applications. To cope with the delay problem, a new emerging concept, known as mobile edge computing (MEC), has been introduced. MEC brings computation and storage resources to the edge of the mobile network, enabling highly demanding applications to run at the UE while meeting strict delay requirements. The MEC computing resources can also be exploited by operators and third parties for specific purposes. In this paper, we first describe major use cases and reference scenarios where MEC is applicable. We then survey existing concepts integrating MEC functionalities into mobile networks and discuss current advancements in MEC standardization. The core of this survey focuses on the user-oriented use case of MEC, i.e., computation offloading. In this regard, we divide the research on computation offloading into three key areas: i) the decision on computation offloading, ii) the allocation of computing resources within the MEC, and iii) mobility management. Finally, we highlight lessons learned in the area of MEC and discuss open research challenges yet to be addressed in order to fully realize the potential offered by MEC.
Mobile-edge computation offloading (MECO) offloads intensive mobile computation to clouds located at the edges of cellular networks. Thereby, MECO is envisioned as a promising technique for prolonging the battery lives and enhancing the computation capacities of mobiles. In this paper, we study resource allocation for a multiuser MECO system based on time-division multiple access (TDMA) and orthogonal frequency-division multiple access (OFDMA). First, for the TDMA MECO system with infinite or finite computation capacity, the optimal resource allocation is formulated as a convex optimization problem for minimizing the weighted sum mobile energy consumption under the constraint on computation latency. The optimal policy is proved to have a threshold-based structure with respect to a derived offloading priority function, which yields priorities for users according to their channel gains and local computing energy consumption. As a result, users with priorities above and below a given threshold perform complete and minimum offloading, respectively. Moreover, for the cloud with finite capacity, a sub-optimal resource-allocation algorithm is proposed to reduce the computation complexity for computing the threshold. Next, we consider the OFDMA MECO system, for which the optimal resource allocation is formulated as a non-convex mixed-integer problem. To solve this challenging problem and characterize its policy structure, a sub-optimal low-complexity algorithm is proposed by transforming the OFDMA problem to its TDMA counterpart. The corresponding resource allocation is derived by defining an average offloading priority function and shown to have close-to-optimal performance by simulation.
Mobile-edge computing (MEC) is an emerging paradigm to meet the ever-increasing computation demands from mobile applications. By offloading the computationally intensive workloads to the MEC server, the quality of computation experience, e.g., the execution latency, could be greatly improved. Nevertheless, as the on-device battery capacities are limited, computation would be interrupted when the battery energy runs out. To provide satisfactory computation performance as well as achieving green computing, it is of significant importance to seek renewable energy sources to power mobile devices via energy harvesting (EH) technologies. In this paper, we will investigate a green MEC system with EH devices and develop an effective computation offloading strategy. The execution cost, which addresses both the execution latency and task failure, is adopted as the performance metric. A low-complexity online algorithm, namely, the Lyapunov optimization-based dynamic computation offloading (LODCO) algorithm is proposed, which jointly decides the offloading decision, the CPU-cycle frequencies for mobile execution, and the transmit power for computation offloading. A unique advantage of this algorithm is that the decisions depend only on the instantaneous side information without requiring distribution information of the computation task request, the wireless channel, and EH processes. The implementation of the algorithm only requires to solve a deterministic problem in each time slot, for which the optimal solution can be obtained either in closed form or by bisection search. Moreover, the proposed algorithm is shown to be asymptotically optimal via rigorous analysis. Sample simulation results shall be presented to verify the theoretical analysis as well as validate the effectiveness of the proposed algorithm.
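The LODCO abstract above notes that each per-slot subproblem is solved either in closed form or by bisection search. A generic bisection sketch of the kind such per-slot solvers rely on; the root-finding target below is an arbitrary example, not the paper's actual optimality condition:

```python
def bisect_root(f, lo, hi, tol=1e-9):
    """Bisection search for a root of f on [lo, hi].

    Assumes f(lo) and f(hi) have opposite signs; repeatedly halves the
    interval until its width falls below tol. Per-slot MEC solvers use
    the same idea to pin down an optimal transmit power or CPU frequency
    from a monotone first-order optimality condition.
    """
    assert f(lo) * f(hi) <= 0, "root must be bracketed"
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(lo) * f(mid) <= 0:
            hi = mid  # root lies in the left half
        else:
            lo = mid  # root lies in the right half
    return (lo + hi) / 2

# Example: solve x**2 - 2 = 0 on [0, 2], i.e., find sqrt(2).
root = bisect_root(lambda x: x * x - 2, 0.0, 2.0)
```

Because the interval halves on every iteration, the cost is logarithmic in the required precision, which is what makes bisection attractive inside a per-slot online algorithm.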