EMERGING TECHNOLOGIES & SERVICES
Distributed Deep Learning for Cooperative Computation
Offloading in Low Earth Orbit Satellite Networks
Qingqing Tang1, Zesong Fei1,*, Bin Li2,3
1School of Information and Electronics, Beijing Institute of Technology, Beijing 100081, China
2School of Computer and Software, Nanjing University of Information Science and Technology, Nanjing 210044, China
3Key Lab of Broadband Wireless Communication and Sensor Network Technology, Nanjing University of Posts and
Telecommunications, Ministry of Education, Nanjing 210003, China
*The corresponding author, email: Feizesong@bit.edu.cn
Abstract: The low earth orbit (LEO) satellite network is an important development trend for future mobile communication systems and can truly realize "ubiquitous connection" across the whole world. In this paper, we present a cooperative computation offloading scheme for LEO satellite networks with a three-tier computation architecture, leveraging the vertical cooperation among ground users, LEO satellites, and the cloud server, as well as the horizontal cooperation between LEO satellites. To improve the quality of service for ground users, we optimize the computation offloading decisions to minimize the total execution delay of ground users, subject to the limited battery capacity of ground users and the computation capability of each LEO satellite. However, the formulated problem becomes a large-scale nonlinear integer programming problem as the number of ground users and LEO satellites increases, which is difficult to solve with general optimization algorithms. To address this challenging problem, we propose a distributed deep learning-based cooperative computation offloading (DDLCCO) algorithm, where multiple parallel deep neural networks (DNNs) are adopted to learn the computation offloading strategy dynamically. Simulation results show that the proposed algorithm can achieve near-optimal performance with low computational complexity compared with other computation offloading strategies.
Keywords: LEO satellite networks; computation offloading; deep neural networks
I. INTRODUCTION
1.1 Background and Motivation
The rapid development of mobile communication technology has brought many emerging applications, such as Augmented Reality (AR) and Virtual Reality (VR), which pose new challenges to current networks [1-3]. Firstly, the limited coverage of traditional terrestrial communication networks makes it difficult to meet the needs of ground users to access the network anytime and anywhere, especially in rural areas, isolated islands, and sea areas without terrestrial communication infrastructure. Secondly, traditional terrestrial communication infrastructure is vulnerable to damage from natural disasters such as earthquakes, causing ground users to lose communication connections with each other. To overcome these shortcomings of terrestrial communication networks, satellite communication networks have emerged. Compared with terrestrial communication networks, satellite communication networks have a wide coverage area and can achieve ubiquitous global coverage. In recent years, satellite communication networks have made great progress, especially for low earth orbit (LEO) satellites. LEO satellite networks are deemed the most promising satellite mobile communication systems due to their low orbital height, short transmission delay, and small path loss.

However, emerging applications such as intelligent transportation and games are computation- and energy-intensive [4, 5], which means the LEO satellite network must provide ground users not only with ubiquitous connections around the world but also with computing service support. In general, ground users in remote mountainous areas without the support of terrestrial communication infrastructure can only offload computation tasks to the remote cloud server for processing through bent-pipe transmission [6]. However, bent-pipe transmission requires ground users to offload computation tasks to the LEO satellite network first, after which the LEO satellite forwards the received computation tasks to the cloud server for processing. As a result, bent-pipe transmission increases the processing delay of computation tasks and may not satisfy the low-latency requirements of ground users. Inspired by terrestrial multi-access edge computing (MEC) technology [7-9], MEC is introduced into the LEO satellite network to sink the rich computing resources of the cloud server to the edge of the LEO satellite network [10]. Therefore, the LEO satellite network can directly process computation tasks from ground users, reducing the task processing delay of ground users.
Recent years have witnessed research progress on computation offloading in LEO satellite networks. The work in [11] proposed a space-ground-sea integrated network architecture, where LEO satellites and unmanned aerial vehicles (UAVs) provide users with edge computing services to optimize the offloading decisions of users. The authors of [12] proposed a network framework that uses ground base stations, high altitude platforms (HAPs), and LEO satellites to provide offloading services for ground users. In [13], the authors proposed a satellite-ground integrated network with dual-edge computing capabilities to reduce the energy consumption and delay of ground users, where the Hungarian algorithm was used to solve the computation offloading problem. Although LEO satellite networks with edge computing have been preliminarily studied, several problems remain unresolved. Firstly, LEO satellites can only be equipped with lightweight MEC servers due to payload limitations. Therefore, when a large number of computation tasks from ground users are offloaded to the same LEO satellite at the same time, the LEO satellite may become computationally overloaded. Secondly, the existing research uses traditional optimization algorithms to deal with the computation offloading problem in LEO satellite networks. However, traditional optimization algorithms require multiple iterations to adjust the offloading decision to the optimum [14], which leads to high computational complexity and is not suitable for real-time computation offloading in LEO satellite networks with time-varying environments. Thirdly, the existing research only considers LEO satellite networks with local and edge computing, while ignoring remote cloud computing with its abundant computing resources.
1.2 Our Solutions and Contributions
Inspired by the above challenges, this paper proposes a cooperative computation offloading scheme for LEO satellite networks with a three-tier computation architecture, which has the following advantages. Firstly, considering that LEO satellites can exchange information through inter-satellite links (ISLs), we design an inter-satellite cooperative computation offloading strategy. Under this framework, the computation tasks of overloaded LEO satellites can be forwarded to other lightly-loaded LEO satellites for processing, which balances the computation load of LEO satellite networks and achieves better resource utilization. Secondly, we propose a distributed deep learning-based cooperative computation offloading (DDLCCO) algorithm to solve the real-time computation offloading problem of LEO satellite networks in a time-varying environment. The DDLCCO algorithm can dynamically adjust the offloading decisions according to the requirements of ground users. Compared with traditional optimization algorithms, this algorithm has low computational complexity and is more suitable for computation offloading in a real network environment. Thirdly, to make full use of the computing resources in the LEO satellite network, we consider not only the horizontal cooperation between LEO satellites but also the vertical cooperation among ground users, LEO satellites, and the cloud server.

Based on the proposed network, we formulate an optimization problem that minimizes the total execution delay of ground users under the constraints of the limited battery capacity of ground users and the computation capability of each LEO satellite. However, the formulated problem becomes a large-scale nonlinear integer programming problem as the number of ground users and LEO satellites increases, which is difficult to solve with general optimization algorithms. To address this challenging problem, we propose a DDLCCO algorithm that uses $K$ parallel deep neural networks (DNNs) to quickly and efficiently generate offloading decisions, obtaining suboptimal solutions to the formulated optimization problem. Compared with a single DNN, these $K$ parallel DNNs use different training parameters such as weights, resulting in large differences in the outputs of the DNNs, which accelerates the convergence of the algorithm. The main contributions of this paper are summarized as follows:
1) For better utilization of computation resources, this paper proposes a cooperative computation offloading scheme for LEO satellite networks with a three-tier computation architecture. In this network, the formulated optimization problem minimizes the total execution delay of ground users under the constraints of the limited battery capacity of ground users and the computation capability of each LEO satellite.

2) The formulated optimization problem is a large-scale nonlinear integer programming problem, and its computational complexity increases dramatically as the number of LEO satellites and ground users increases. To this end, we propose a DDLCCO algorithm to find the near-optimal solution, where multiple parallel DNNs are used to generate offloading decisions effectively in a distributed fashion.

3) Simulation results show that the convergence of the DDLCCO algorithm can be accelerated by using multiple parallel DNNs compared with a single DNN. In addition, the gap between the proposed algorithm and the enumeration algorithm is relatively small, which means that the proposed algorithm achieves good performance.
The remainder of this paper is organized as follows. Related works are presented in Section II. In Section III, the system model of the three-tier cooperative computation offloading network is presented. Section IV describes the formulated optimization problem. Section V introduces the DDLCCO algorithm. Section VI presents the simulation results of the proposed algorithm. Finally, this paper is concluded in Section VII.
II. RELATED WORK
Currently, a range of literature addresses the computation offloading problem in LEO satellite networks to reduce the energy consumption or execution delay of ground users [15-19].
To elaborate a little further, in [15], the authors proposed a hybrid cloud and edge computing LEO satellite network to reduce the energy consumption of ground users, where the alternating direction method of multipliers algorithm was used to solve the computation offloading problem. The authors of [16] proposed a computation offloading strategy based on game theory in LEO satellite networks to minimize the response time and energy consumption of computation tasks. The work in [17] used a dynamic network virtualization technology to integrate the computation resources within the coverage of LEO satellites to minimize user-perceived delay and energy consumption. In addition, considering the heterogeneity of resources in LEO satellite networks, the authors of [18] proposed a satellite-ground integrated network to dynamically manage the computing resources and spectrum resources of the network, and a deep learning algorithm was adopted to solve the joint resource allocation optimization problem. The work in [19] proposed a space-air-ground integrated computing architecture, where the computation tasks from ground/air users can be processed on HAPs or offloaded to LEO satellites. Furthermore, a joint user association and offloading decision optimization problem was studied with the goal of maximizing the sum rate of ground users.

From the above analysis, the existing works mainly focus on the vertical cooperation among ground users, LEO satellites, or the cloud server while ignoring the horizontal cooperation between LEO satellites. Moreover, most of the existing works use general optimization algorithms to solve the problem of computation offloading or resource allocation, which leads to high computational complexity and is not suitable for real-time computation offloading in LEO satellite networks with time-varying environments. In this paper, we consider not only the vertical cooperation among ground users, LEO satellites, and the cloud server but also the horizontal cooperation between LEO satellites. In addition, a DDLCCO algorithm is proposed for computation offloading in LEO satellite networks with time-varying environments.
Figure 1. The system model of cooperative computation offloading in LEO satellite networks with a three-tier computation architecture.
III. SYSTEM MODEL
In this section, we present the system model of the three-tier cooperative computation offloading network, which includes the network model, coverage model, communication model, and computation model.
3.1 Network Model
We consider cooperative computation offloading in LEO satellite networks with a three-tier computation architecture, as shown in Figure 1, which includes $M$ LEO satellites, $I$ ground users, and a remote cloud server. The sets of all LEO satellites and ground users are denoted as $\mathcal{M} = \{1, 2, 3, \ldots, M\}$ and $\mathcal{I} = \{1, 2, 3, \ldots, I\}$, respectively. Each LEO satellite is equipped with a lightweight MEC platform such as Docker and can be considered an edge computing node. Furthermore, each LEO satellite is connected to the remote cloud server via feeder links, and multiple neighboring LEO satellites can communicate with each other via ISLs.

In the considered network, each ground user has a computation task $W_i = (D_i, X_i)$ to be processed either by itself, by LEO satellites, or by the remote cloud server. $D_i$ represents the size of the input computation task, and $X_i$ denotes the central processing unit (CPU) cycles required to accomplish the computation task $W_i$. Specifically, when LEO satellite $m$ receives an offloading request from a ground user, the LEO satellite can process the computation task by itself, forward it to other LEO satellites with remaining computation resources, or further forward it to the cloud server for processing. Note that the computation tasks of ground users cannot be partitioned [20], and the size of the input computation tasks changes over time; that is, the requirements of ground users change over time. The notation used in the rest of this paper is summarized in Table 1.
3.2 Coverage Model
LEO satellites are characterized by high-speed movement, and thus the communication between ground users and LEO satellites differs from that in terrestrial communication networks. According to [21], an LEO satellite can only communicate with ground users during a certain period, which can be characterized by the elevation angle between ground users and LEO satellites. The elevation angle between a ground user and an LEO satellite can be calculated by
$$\varpi = \arccos\left(\frac{R_e + h}{s} \cdot \sin\gamma\right), \qquad (1)$$
where $h$ denotes the distance between the ground user and the LEO satellite orbit, $R_e$ is the radius of the earth, $s$ is the distance between the ground user and the LEO satellite, and $\gamma$ stands for the geocentric angle.
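To make Eq. (1) concrete, the following minimal Python sketch evaluates the elevation angle, with the slant range $s$ computed from Eq. (10) below. The 784 km orbit altitude is the value used later in Section VI, while the earth radius is a standard approximation rather than a value taken from the paper.

```python
import math

R_E = 6371.0   # radius of the earth (km), standard approximation
H = 784.0      # orbit altitude (km), from Section VI

def elevation_angle(s_km, gamma_rad):
    """Eq. (1): elevation angle between a ground user and an LEO satellite."""
    return math.acos((R_E + H) / s_km * math.sin(gamma_rad))

# example: geocentric angle of 5 degrees; slant range s from Eq. (10)
gamma = math.radians(5.0)
s = math.sqrt(R_E**2 + (R_E + H)**2 - 2 * R_E * (R_E + H) * math.cos(gamma))
print(math.degrees(elevation_angle(s, gamma)))  # roughly 50 degrees
```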
Considering that the MEC server on each LEO satellite is a lightweight computing platform, when a large number of ground users send computation offloading requests to the same LEO satellite, the LEO satellite may be overloaded. Since LEO satellites can exchange information through ISLs to obtain the remaining computing resource status, the computation tasks of ground users can be completed through the cooperation between LEO satellites. Specifically, the computation tasks received by overloaded LEO satellites can be forwarded to other lightly-loaded LEO satellites for processing through the ISLs, which balances the computation load of the LEO satellite network and achieves better resource utilization.
Table 1. Notation.

Notation : Definition
$\mathcal{M}$ : Set of LEO satellites
$\mathcal{I}$ : Set of ground users
$h$ : Distance between ground users and the LEO satellite orbit
$R_e$ : Radius of the earth
$s$ : Distance between ground users and LEO satellites
$\varpi$ : Elevation angle
$\gamma$ : Geocentric angle
$g_{i,m}$ : Channel gain from ground user $i$ to LEO satellite $m$
$B$ : Available spectrum bandwidth
$p_i$ : Uplink transmit power
$R_{i,m}$ : Uplink transmission rate
$D_i$ : Data size of computation task $W_i$
$X_i$ : Required CPU cycles
$f^L_i$, $f^S_{i,m}$, $f^C_i$ : Available computation capability
$T^L_i$, $T^S_{i,m}$, $T^C_i$ : Execution time of task $W_i$
$E^L_i$, $E^S_{i,m}$ : Energy consumption of task $W_i$
3.3 Communication Model
We assume that each ground user is associated with only one LEO satellite within a time slot, and each ground user has only one computation task to be offloaded in each time slot. Furthermore, we consider that the spectrum used by ground users is overlapped, which implies that there exists interference between ground users. According to [22], the uplink transmission rate of a ground user that chooses to offload its computation task to the LEO satellite through a wireless link can be denoted as
$$R_{i,m} = B \log_2\left(1 + \frac{g_{i,m} p_i}{\sum_{j \in \mathcal{I} \setminus \{i\}} g_{j,m} p_j + \sigma^2}\right), \qquad (2)$$
where $g_{i,m}$ expresses the channel gain between ground user $i$ and LEO satellite $m$, $B$ is the available spectrum bandwidth, $p_i$ denotes the uplink transmit power of ground user $i$, and $\sigma^2$ represents the additive white Gaussian noise (AWGN) power.

In general, the size of the input computation task is much larger than the size of the computation result [23]. Thus, the delay caused by transmitting computation results to ground users is ignored in this paper.
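As an illustration of Eq. (2), the sketch below computes the uplink rate of one user under interference from all co-channel users. The channel gains and noise power are made-up placeholder values, while the 20 MHz bandwidth and 23 dBm transmit power follow the simulation setup in Section VI.

```python
import numpy as np

def uplink_rate(g, p, i, bandwidth_hz, noise_power_w):
    """Eq. (2): uplink rate of user i with co-channel interference.
    g: channel gains of all users to the serving satellite; p: transmit powers (W)."""
    interference = np.sum(g * p) - g[i] * p[i]   # sum over all j != i
    sinr = g[i] * p[i] / (interference + noise_power_w)
    return bandwidth_hz * np.log2(1.0 + sinr)

# illustrative (assumed) gains; 23 dBm converted to watts
g = np.array([1e-13, 3e-13, 2e-13])
p = np.full(3, 10 ** ((23 - 30) / 10))
print(uplink_rate(g, p, 0, 20e6, 1e-13))  # bits per second
```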
3.4 Computation Model
Through the above analysis, there are three schemes for ground users to process their computation tasks. Let $a_i \in \{0, 1\}$ denote whether the computation task $W_i$ of ground user $i$ is processed locally, where $a_i = 1$ denotes that the computation task $W_i$ is computed by ground user $i$; otherwise, $a_i = 0$. Let $b_{i,m} \in \{0, 1\}$ indicate whether the computation task $W_i$ of ground user $i$ is processed by the associated LEO satellite $m$, where $b_{i,m} = 1$ denotes that the computation task $W_i$ is offloaded to LEO satellite $m$; otherwise, $b_{i,m} = 0$. Similarly, let $c_i \in \{0, 1\}$ express whether the computation task $W_i$ is processed by the cloud server, where $c_i = 1$ denotes that the computation task $W_i$ is executed by the cloud server; otherwise, $c_i = 0$. Considering that each computation task has only one offloading decision in each time slot, the offloading decision of ground user $i$ needs to satisfy the following constraint:
$$a_i + \sum_{m \in \mathcal{M}} b_{i,m} + c_i = 1, \quad \forall i. \qquad (3)$$
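Constraint (3) simply requires that exactly one of the $M + 2$ binary indicators is set. A small, hypothetical validity check makes this explicit:

```python
def is_valid_decision(a_i, b_i, c_i):
    """Constraint (3): each task gets exactly one offloading decision per slot.
    b_i is the list (b_{i,1}, ..., b_{i,M})."""
    binary = all(v in (0, 1) for v in [a_i, c_i, *b_i])
    return binary and (a_i + sum(b_i) + c_i == 1)

print(is_valid_decision(0, [1, 0, 0], 0))  # offload to LEO satellite 1 -> True
print(is_valid_decision(1, [1, 0, 0], 0))  # two decisions at once -> False
```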
IV. PROBLEM FORMULATION FOR COMPUTATION OFFLOADING SCHEME

In Subsection 4.1, the computation cost of the different offloading schemes is discussed. Then, the optimization problem for minimizing the total execution delay of ground users is formulated in Subsection 4.2.
4.1 Computation Cost
According to the chosen offloading scheme, the computation costs in terms of energy consumption and delay differ for ground users, as summarized in the code sketch at the end of this subsection.
1) Local computing: For local computing, we define $f^L_i$ as the local computation capability of ground user $i$. Thus, the execution time of computation task $W_i$ processed by ground user $i$ can be calculated by
$$T^L_i = \frac{X_i}{f^L_i}, \quad \forall i, \qquad (4)$$
and the energy consumption $E^L_i$ of computation task $W_i$ processed by ground user $i$ can be expressed as
$$E^L_i = \varepsilon \left(f^L_i\right)^2 X_i, \quad \forall i, \qquad (5)$$
where $\varepsilon$ is the energy coefficient, whose value depends on the chip architecture [24].
2) LEO satellite computing: For LEO satellite computing, we set $f^S_{i,m}$ as the computation capability (CPU cycles/s) allocated to ground user $i$ by LEO satellite $m$. Therefore, the computation delay of computation task $W_i$ computed by LEO satellite $m$ can be denoted as
$$T^{\mathrm{comp}}_{i,m} = \frac{X_i}{f^S_{i,m}}, \quad \forall i, m. \qquad (6)$$
When a large number of computation tasks from ground users are offloaded to the same LEO satellite at the same time, the LEO satellite will be overloaded. Hence, the computation tasks on the overloaded LEO satellite need to be forwarded to other LEO satellites for processing. Let $T^{\mathrm{tr}}_{m,k}$ stand for the average round-trip time for transferring computation task $W_i$ from LEO satellite $m$ to LEO satellite $k$. The round-trip time can be estimated using the average values of historical information [25]. Moreover, $T^{\mathrm{tr}}_{m,k} = 0$ when $m = k$, since there is no computation task to transfer within the same LEO satellite. Therefore, if the computation task $W_i$ is finally computed at LEO satellite $k$, the total delay consists of the transmission delay between ground user $i$ and LEO satellite $m$, the propagation delay between ground user $i$ and LEO satellite $m$, the transfer delay between LEO satellite $m$ and LEO satellite $k$, and the computing delay at LEO satellite $k$. The total delay of computation task $W_i$ executed by LEO satellite $k$ can be denoted as
$$T^S_{i,k} = T^{\mathrm{trans}}_{i,m} + T^{\mathrm{prop}}_{i,m} + T^{\mathrm{tr}}_{m,k} + T^{\mathrm{comp}}_{i,k}, \quad \forall i, m, \qquad (7)$$
where the transmission delay between ground user $i$ and LEO satellite $m$ can be obtained by
$$T^{\mathrm{trans}}_{i,m} = \frac{D_i}{R_{i,m}}, \quad \forall i, m, \qquad (8)$$
and the propagation delay between ground user $i$ and LEO satellite $m$ can be calculated by
$$T^{\mathrm{prop}}_{i,m} = \frac{s_{i,m}}{v}, \quad \forall i, m, \qquad (9)$$
where $v$ is the speed of light and $s_{i,m}$ denotes the distance between ground user $i$ and LEO satellite $m$, obtained by
$$s_{i,m} = \sqrt{R_e^2 + (R_e + h)^2 - 2 R_e (R_e + h) \cos\gamma}. \qquad (10)$$
Furthermore, the energy consumption $E^S_{i,m}$ of ground user $i$ for offloading computation task $W_i$ to LEO satellite $m$ can be calculated by
$$E^S_{i,m} = p_i \frac{D_i}{R_{i,m}}, \quad \forall i, m. \qquad (11)$$
3) Cloud computing: For cloud computing, the computation task $W_i$ is processed by the remote cloud server. Specifically, if the computation task $W_i$ is offloaded to the cloud server, ground user $i$ first transmits the computation task $W_i$ to LEO satellite $m$ via a wireless link. Then, LEO satellite $m$ forwards the received computation task $W_i$ to the cloud server through a feeder link. Let $f^C_i$ denote the computation capability (CPU cycles/s) allocated to ground user $i$ by the cloud server [26]. The total delay $T^C_i$ of computation task $W_i$ processed by the cloud server includes the transmission delay between ground user $i$ and LEO satellite $m$, the propagation delay between ground user $i$ and LEO satellite $m$, the delay for transmitting computation task $W_i$ from LEO satellite $m$ to the cloud server, and the computing delay at the cloud server. Thus, the total delay of computation task $W_i$ processed by the cloud server can be expressed as
$$T^C_i = T^{\mathrm{trans}}_{i,m} + T^{\mathrm{prop}}_{i,m} + T^{\mathrm{back}}_i + T^{\mathrm{comp}}_i, \quad \forall i, m, \qquad (12)$$
where the computing delay of computation task $W_i$ processed by the cloud server can be denoted as
$$T^{\mathrm{comp}}_i = \frac{X_i}{f^C_i}, \quad \forall i, \qquad (13)$$
and the delay of transmitting computation task $W_i$ from LEO satellite $m$ to the cloud server can be calculated by
$$T^{\mathrm{back}}_i = \frac{D_i}{r}, \quad \forall i, \qquad (14)$$
where $r$ is the rate for transmitting computation task $W_i$ from LEO satellite $m$ to the cloud server. Note that the delay caused by transmitting computation results from the cloud server to LEO satellite $m$ is ignored in this paper. Furthermore, the energy consumption of a ground user for offloading its computation task is the same whether the task is finally processed by LEO satellites or by the cloud server.
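To tie the three cost models together, the following sketch evaluates Eqs. (4), (7), and (12) side by side. All inputs are illustrative numbers: the computation capabilities match Section VI, while the uplink rate, slant range, feeder rate, and ISL round-trip time are assumptions.

```python
def local_delay(X_i, f_L):                                        # Eq. (4)
    return X_i / f_L

def satellite_delay(D_i, X_i, R_im, s_im, T_tr_mk, f_S, v=3e8):   # Eqs. (6)-(9)
    t_trans = D_i / R_im          # Eq. (8), transmission delay
    t_prop = s_im / v             # Eq. (9), propagation delay (s_im in meters)
    return t_trans + t_prop + T_tr_mk + X_i / f_S                 # Eq. (7)

def cloud_delay(D_i, X_i, R_im, s_im, r_feeder, f_C, v=3e8):      # Eqs. (12)-(14)
    return D_i / R_im + s_im / v + D_i / r_feeder + X_i / f_C

# illustrative task: 2,000 kilobits, 1,000 Mcycles; capabilities from Section VI
D_i, X_i = 2e6, 1e9
print(local_delay(X_i, 0.1e9))                            # 10 s processed locally
print(satellite_delay(D_i, X_i, 20e6, 1e6, 0.01, 3e9))    # offload to an LEO MEC server
print(cloud_delay(D_i, X_i, 20e6, 1e6, 100e6, 10e9))      # forward to the cloud
```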
4.2 Problem Formulation
To improve the quality of service for ground users, we formulate the cooperative computation offloading problem in LEO satellite networks to minimize the total execution delay of ground users while considering the limited battery capacity of ground users and the computation capability of LEO satellites. Let $\mathbf{X}_i = \{a_i, b_{i,1}, b_{i,2}, \ldots, b_{i,M}, c_i\}$ denote the computation offloading vector of ground user $i$, and let $\mathbf{X} = \{\mathbf{X}_i, i \in \mathcal{I}\}$ express the computation offloading decisions of all ground users. Mathematically, the problem of interest reads
$$\min_{\mathbf{X}} \sum_{i=1}^{I} \sum_{m=1}^{M} \left( a_i T^L_i + b_{i,m} T^S_{i,m} + c_i T^C_i \right) \qquad (15a)$$
$$\text{s.t.} \quad a_i + \sum_{m \in \mathcal{M}} b_{i,m} + c_i = 1, \quad \forall i, \qquad (15b)$$
$$a_i E^L_i + (1 - a_i) E^S_{i,m} \le E^{\max}_i, \quad \forall i, m, \qquad (15c)$$
$$\sum_{i \in \mathcal{I}} b_{i,m} X_i \le Z_m, \quad \forall m, \qquad (15d)$$
$$a_i, b_{i,m}, c_i \in \{0, 1\}, \quad \forall i, m, \qquad (15e)$$
where $E^{\max}_i$ is the maximum battery capacity of ground user $i$, and $Z_m$ is the maximum computation capability of LEO satellite $m$. (15a) is the objective function of the formulated optimization problem, which represents the total execution delay of all ground users. Furthermore, (15b) and (15c) indicate, respectively, that each ground user has only one offloading decision for processing its computation task and that the total energy consumption of each ground user cannot exceed its maximum battery capacity. (15d) is the maximum computation capability constraint for LEO satellites, and (15e) restricts the decision variables to binary values, so that each ground user chooses among the three offloading schemes.
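For very small instances, problem (15) can be solved by exhaustive search, which is exactly the enumeration benchmark used later in Section VI. The sketch below enumerates all $(M+2)^I$ decision profiles and keeps the one minimizing objective (15a); the energy and load constraints (15c)-(15d) are omitted for brevity and would be enforced by filtering out infeasible profiles.

```python
from itertools import product

def total_delay(decisions, T_L, T_S, T_C):
    """Objective (15a). decisions[i] is 'local', 'cloud', or ('sat', m);
    T_L[i], T_S[i][m], T_C[i] are the per-task delays from Eqs. (4), (7), (12)."""
    total = 0.0
    for i, d in enumerate(decisions):
        if d == "local":
            total += T_L[i]
        elif d == "cloud":
            total += T_C[i]
        else:
            total += T_S[i][d[1]]
    return total

def enumerate_best(I, M, T_L, T_S, T_C):
    """Exhaustive search over (M+2)^I profiles; only viable for tiny networks.
    Constraints (15c)-(15d) would be checked here before accepting a profile."""
    options = ["local", "cloud"] + [("sat", m) for m in range(M)]
    best = min(product(options, repeat=I),
               key=lambda dec: total_delay(dec, T_L, T_S, T_C))
    return best, total_delay(best, T_L, T_S, T_C)
```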
However, the formulated optimization problem is a large-scale nonlinear integer programming problem as the number of ground users and LEO satellites increases. In addition, since the objective function and constraints of the formulated optimization problem contain binary variables, the problem is NP-hard. In general, such a problem can be reformulated by traditional relaxation methods and then solved using convex optimization techniques [14]. However, traditional optimization algorithms require a large number of iterations to adjust the offloading decision to the optimum, which leads to high computational complexity and is not suitable for real-time computation offloading in LEO satellite networks with time-varying environments. To solve this problem effectively, we propose a DDLCCO algorithm to obtain suboptimal solutions in the following section.
V. DISTRIBUTED DEEP LEARNING-BASED COOPERATIVE COMPUTATION OFFLOADING SCHEME
To find a satisfactory solution to the formulated optimization problem, we propose a DDLCCO algorithm, which consists of offloading decision generation and deep learning. Specifically, we first give an introduction to DNNs in Subsection 5.1. Then, an overview of the DDLCCO algorithm is given in Subsection 5.2. Finally, offloading decision generation and deep learning are described in Subsections 5.3 and 5.4, respectively.
5.1 Deep Neural Network (DNN)
Before introducing the DNN model, we first give a brief introduction to the perceptron, since the DNN model is an extension of the perceptron. As shown in Figure 2(a), the perceptron consists of three inputs, one neuron, and one output. Through this neuron, the linear relationship between output and input is learned to obtain an intermediate output (not yet the final output), which can be denoted as
$$z = \sum_{i=1}^{3} w_i x_i + b, \qquad (16)$$
where $w_i$ and $b$ denote the weights and the bias, respectively. Then, the output of the perceptron can be obtained by
$$y = \delta(z), \qquad (17)$$
where $\delta(\cdot)$ is the activation function. The choice of the activation function mainly depends on the kind of result we want to output; e.g., if we need the output to satisfy $y \in \{-1, 1\}$, then we can choose $\mathrm{sign}(z)$ as the activation function.

Figure 2. The perceptron and DNN model.
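Eqs. (16) and (17) amount to a single dot product followed by an activation, as the following minimal sketch shows; the input, weight, and bias values are illustrative.

```python
import numpy as np

def perceptron(x, w, b, activation=np.sign):
    """Eqs. (16)-(17): z = sum_i w_i x_i + b, then y = delta(z)."""
    z = np.dot(w, x) + b        # Eq. (16), linear combination of the inputs
    return activation(z)        # Eq. (17), activation produces the output

x = np.array([0.5, -1.0, 2.0])  # three inputs, as in Figure 2(a)
w = np.array([0.2, 0.4, -0.1])
print(perceptron(x, w, b=0.1))  # sign activation yields a label in {-1, 1}
```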
The neural network is an extension of the perceptron, and a DNN can be interpreted as a neural network with multiple hidden layers, as shown in Figure 2(b). The layers of a DNN are fully connected, which means that any neuron in the $i$-th layer is connected to every neuron in the $(i+1)$-th layer [27, 28]. The learning process of a DNN is composed of the forward propagation process and the back propagation (BP) process. In forward propagation, the training samples are first fed to the input layer, then pass through the hidden layers, and finally reach the output layer, which produces a result. Since there is an error between the outputs of the DNN and the actual values of the samples, we need to calculate the error between the output values and the actual values and then propagate this error from the output layer back to the input layer. In the BP process, we continuously adjust the values of the weights to minimize this error. In general, the error between the output values and the actual values can be expressed as a loss function. The purpose of DNN training is to minimize the loss function so as to obtain the model that we need.
5.2 An Overview of DDLCCO Algorithm
The structure of the DDLCCO algorithm is shown in Figure 3, and it consists of offloading decision generation and deep learning. The generation of offloading decisions mainly depends on $K$ parallel DNNs, which are characterized by their embedded parameters, such as the weights of connected hidden neurons. Let $\theta_k$ denote the embedded parameters of DNN $k$. At the $t$-th time slot, each DNN first takes the offloading data $D_t$ of ground users as input and then outputs a relaxed offloading decision $\{\hat{\mathbf{x}}_{k,t}, k \in \mathcal{K}\}$ (a continuous variable between 0 and 1). To meet the objective function and constraints of problem (15), we need to map these continuous output variables to binary variables. Finally, the offloading decision $\mathbf{x}^*_t$ that minimizes problem (15) is chosen as the final output of the offloading decision generation stage, and the newly obtained data pair $(D_t, \mathbf{x}^*_t)$ is stored in the replay memory.

Figure 3. The structure of the DDLCCO algorithm.

As for the deep learning stage in the $t$-th time slot, a batch of samples is taken from the memory to train the $K$ parallel DNNs. Meanwhile, the parameters of the $K$ parallel DNNs are updated according to the loss function. The above steps are then repeated to train the $K$ parallel DNNs until the entire network reaches a steady state. The specific processes of these two stages are introduced in the following subsections.
5.3 Offloading Decision Generation
For the input offloading data $D_t$ in the $t$-th time slot, the parameters $\theta_{k,t}$ of the $K$ parallel DNNs are randomly initialized. Note that the $K$ parallel DNNs have the same structure, but their parameters $\theta_{k,t}$ are different. Correspondingly, each DNN outputs a relaxed offloading decision $\hat{\mathbf{x}}_{k,t} \in [0, 1]$ according to the parameterized function $f_{\theta_{k,t}}$, where $\hat{\mathbf{x}}_{k,t}$ represents the output of DNN $k$ in the $t$-th time slot. Furthermore, we adopt the Rectified Linear Unit (ReLU) as the activation function in the hidden layers to correlate the output of each neuron with its input [29]. In the output layer, we use the sigmoid function as the activation function, i.e., $y = 1/(1 + e^{-x})$. However, the output of each DNN is a continuous variable, so to solve problem (15) effectively, we need to map these continuous variables to binary variables.
Algorithm 1. DDLCCO algorithm for computation offloading.

1: Input offloading data $D_t$ at the $t$-th time slot;
2: Initialize the $K$ parallel DNNs with random parameters $\theta_{k,t}$ and an empty memory;
3: Let $\sigma$ represent the training interval.
4: for $t = 1, 2, 3, \ldots, T$ do
5:   Input offloading data $D_t$ to all $K$ DNNs;
6:   Generate a relaxed offloading decision $\hat{\mathbf{x}}_{k,t} = f_{\theta_{k,t}}(D_t)$ from each DNN $k$;
7:   Map the continuous offloading decisions into binary actions $\mathbf{x}_{k,t}$;
8:   Compute $Q(D_t, \mathbf{x}_{k,t})$ according to $\mathbf{x}_{k,t}$;
9:   Choose the best offloading decision according to $\mathbf{x}^*_t = \arg\min_k Q(D_t, \mathbf{x}_{k,t})$;
10:  Update the memory by adding $(D_t, \mathbf{x}^*_t)$;
11:  if $t \bmod \sigma = 0$ then
12:    Randomly select $K$ batches of training samples from the memory;
13:    Train the DNNs and update $\theta_{k,t}$ using the Adam algorithm;
14:  end if
15: end for
In this paper, we adopt the binary mapping method of [30], which can be expressed as
$$x_{k,t} = \begin{cases} 1, & \hat{x}_{k,t} \ge 0.5, \\ 0, & \hat{x}_{k,t} < 0.5. \end{cases} \qquad (18)$$
In this way, the continuous output variables of the $K$ parallel DNNs are mapped to binary variables. After obtaining the outputs of the $K$ parallel DNNs, these variables are substituted into problem (15), and the best offloading decision is chosen by the following formula:
$$\mathbf{x}^*_t = \arg\min_k Q(D_t, \mathbf{x}_{k,t}). \qquad (19)$$
Here, we set $Q(\mathbf{X}) = \sum_{i=1}^{I} \sum_{m=1}^{M} \left( a_i T^L_i + b_{i,m} T^S_{i,m} + c_i T^C_i \right)$.
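A minimal sketch of the decision-generation stage is given below, assuming the DNN shape reported in Section VI (120- and 80-neuron ReLU hidden layers with a sigmoid output) and an illustrative input/output encoding of one entry per user-option pair. Here `q_fn` stands for an evaluation of $Q(D_t, \mathbf{x}_{k,t})$ in (19), and any feasibility repair for constraint (3) is assumed to happen inside it.

```python
import numpy as np
import tensorflow as tf

I, M, K = 24, 3, 3                  # users, satellites, parallel DNNs (Section VI)
IN_DIM, OUT_DIM = I, I * (M + 2)    # assumed encoding: task sizes in, one entry per option

def build_dnn():
    """One of the K parallel DNNs: ReLU hidden layers, sigmoid output (Eq. (18) input)."""
    return tf.keras.Sequential([
        tf.keras.layers.Dense(120, activation="relu", input_shape=(IN_DIM,)),
        tf.keras.layers.Dense(80, activation="relu"),
        tf.keras.layers.Dense(OUT_DIM, activation="sigmoid"),
    ])

dnns = [build_dnn() for _ in range(K)]   # random initial weights differ per DNN

def generate_decision(d_t, q_fn):
    """Eqs. (18)-(19): relax, binarize at 0.5, keep the decision minimizing Q."""
    candidates = [(net(d_t[None, :]).numpy()[0] >= 0.5).astype(np.float32)
                  for net in dnns]
    return min(candidates, key=q_fn)
```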
5.4 Deep Learning
The optimal offloading decision $\mathbf{x}^*_t$ obtained by (19) and its corresponding input offloading data $D_t$, i.e., the pair $(D_t, \mathbf{x}^*_t)$, is saved in an initially empty memory with limited capacity. When the memory is full, the newly generated sample replaces the oldest one.

We use the experience replay technique [31] to train the $K$ parallel DNNs with the samples stored in the memory. Firstly, we randomly select a batch of training samples from the memory. Then, the Adam algorithm [32, 33] is used to update the parameters of the $K$ parallel DNNs to minimize the cross-entropy loss. The cross-entropy loss is calculated by
$$L(\theta_{k,t}) = -\left(\mathbf{x}^*_t\right)^{\mathsf{T}} \log f_{\theta_{k,t}}(D_t) - \left(1 - \mathbf{x}^*_t\right)^{\mathsf{T}} \log\left(1 - f_{\theta_{k,t}}(D_t)\right). \qquad (20)$$
In this paper, we use the cross-entropy function as the loss function since it can effectively accelerate the convergence of the algorithm compared to other loss functions such as the mean square error. The detailed process of the DDLCCO algorithm is shown in Algorithm 1.
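The deep learning stage can be sketched as follows. The batch size is an assumption (the paper fixes only the memory size of 1024 and the training interval of 10), and `tf.keras`'s averaged binary cross-entropy is used as a stand-in for the summed form of Eq. (20).

```python
import random
import numpy as np
import tensorflow as tf

MEMORY_SIZE, BATCH = 1024, 128       # memory size from Section VI; batch size assumed
memory = []                           # replay memory of (D_t, x_t*) pairs

def remember(d_t, x_star):
    """Store a new sample; once full, the oldest sample is overwritten (Section 5.4)."""
    if len(memory) >= MEMORY_SIZE:
        memory.pop(0)
    memory.append((d_t, x_star))

bce = tf.keras.losses.BinaryCrossentropy()   # averaged form of the loss in Eq. (20)

def train_step(dnn, optimizer):
    """One Adam update of a single DNN on a random batch from the replay memory."""
    batch = random.sample(memory, min(BATCH, len(memory)))
    d = np.stack([b[0] for b in batch]).astype(np.float32)
    x = np.stack([b[1] for b in batch]).astype(np.float32)
    with tf.GradientTape() as tape:
        loss = bce(x, dnn(d, training=True))
    grads = tape.gradient(loss, dnn.trainable_variables)
    optimizer.apply_gradients(zip(grads, dnn.trainable_variables))
    return float(loss)

# every sigma = 10 slots, call train_step(dnn, tf.keras.optimizers.Adam(0.01))
# for each of the K DNNs; the 0.01 learning rate is the one chosen in Section 6.1
```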
VI. SIMULATION RESULTS
In this section, we evaluate the performance of the proposed DDLCCO algorithm through simulations and compare it with the following algorithms:

1) Vertical Cooperation: The computation task of a ground user can only be processed by itself, by LEO satellites, or by the cloud server.

2) Greedy: Since MEC can usually provide a lower computation delay, each ground user offloads all computation tasks to its associated LEO satellite.

3) Enumeration: The enumeration algorithm is a traditional optimization algorithm that finds the optimal solution by searching all possible offloading decisions of ground users. However, the computational complexity of this algorithm is very high, and thus we only evaluate its performance in a small network.
In the simulation, the software environment is Python 3.6 with TensorFlow and Matlab 2018b, and the hardware environment is a GPU-based server. We assume that there are 3 LEO satellites and 24 ground users in the network, where the 3 LEO satellites are in an orbit of 784 kilometers (km). Furthermore, we assume that ground users are randomly deployed in a fixed area, and each ground user has only one computation task to be processed in each time slot. The transmit power of each ground user is 23 dBm, and the channel bandwidth is 20 MHz. For the computation task, we consider that the size of the input computation task is randomly distributed between 1,000 and 5,000 kilobits, and the number of CPU cycles required to accomplish a computation task is 1,000 Megacycles (Mcycles). In addition, the computation capability of ground users is 0.1 Gigacycles per second (Gcycles/s). The computation capability allocated by the LEO satellites and the cloud server to each ground user is 3 Gcycles/s and 10 Gcycles/s [13], [34], respectively.
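For reference, the setup above can be collected into a single configuration bundle; this dict is purely a convenience for reproducing the experiments and is not part of the paper.

```python
# Simulation parameters gathered from Section VI (units noted in comments)
SIM_PARAMS = dict(
    num_satellites=3,
    num_users=24,
    orbit_altitude_km=784,
    tx_power_dbm=23,
    bandwidth_hz=20e6,
    task_size_bits=(1e6, 5e6),   # 1,000-5,000 kilobits, drawn randomly per task
    task_cycles=1e9,             # 1,000 Mcycles per task
    f_local=0.1e9,               # ground user: 0.1 Gcycles/s
    f_sat=3e9,                   # per-user LEO allocation: 3 Gcycles/s
    f_cloud=10e9,                # per-user cloud allocation: 10 Gcycles/s
)
```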
The DNNs in our proposed DDLCCO algorithm are fully connected and consist of one input layer, two hidden layers, and one output layer. The first hidden layer has 120 neurons, and the second hidden layer has 80 neurons. The training interval $\sigma$ and the memory size are set to 10 and 1024, respectively. Next, we illustrate the advantages of the proposed DDLCCO algorithm through simulation.
6.1 Convergence Performance
Figure 4. The ratio versus different training steps.

We use the ratio of the suboptimal solution obtained by the proposed DDLCCO algorithm to the optimal solution obtained by the enumeration algorithm as the ordinate of Figure 4. To show that using multiple DNNs to generate offloading decisions performs better than a single DNN, we compare how this ratio changes under different numbers of DNNs. Intuitively, the higher the value of the ratio, the closer the solution of the proposed DDLCCO algorithm is to the optimal solution. It can be seen from Figure 4 that the value of the ratio increases as the number of DNNs increases and gradually approaches 1. Another observation is that, as the number of DNNs increases, the convergence speed becomes faster. This is because, by using DNNs with different parameters, the outputs of the different DNNs differ, and this difference accelerates the convergence of the algorithm. In this paper, the proposed DDLCCO algorithm uses 3 parallel DNNs to generate offloading decisions, which not only speeds up the convergence of the algorithm but also obtains a solution that is closest to the optimal one.

Figure 5. The learning loss versus different learning rates.

Figure 5 shows the relationship between the learning loss and the training steps of the proposed DDLCCO algorithm when the learning rate is 0.01, 0.001, 0.0001, and 0.00001, respectively. It can be observed from Figure 5 that the learning rate affects the learning performance, because the learning rate is the step length used when minimizing the loss function. The higher the learning rate, the faster the convergence of the loss function, which indicates that the algorithm approaches the suboptimal solution faster. As a result, this paper chooses a learning rate of 0.01 to train the DNN model because it yields the best learning performance.
6.2 System Performance
Figure 6. The total delay of ground users versus the computation capability of ground users.

To show that the proposed DDLCCO algorithm outperforms the other benchmarks, Figure 6 compares the total delay of ground users for the four algorithms versus the computation capability of ground users. We can observe that the total delay of all four algorithms decreases as the computation capability of the ground user increases. This is due to the fact that, when the computation capability of the ground user increases, the ground user can process its computation tasks by itself without offloading them to LEO satellites or the cloud server, which removes the delay for transmitting computation tasks to LEO satellites. It is interesting to note that the total delay of the proposed DDLCCO algorithm is lower than that of the vertical cooperation algorithm and the greedy algorithm, and the gap between the proposed DDLCCO algorithm and the enumeration algorithm is relatively small. The reason is that the proposed DDLCCO algorithm provides ground users with multiple offloading schemes, considering not only the cooperation among ground users, LEO satellites, and the cloud server but also the cooperation between LEO satellites.

Figure 7. The total delay of ground users versus the computation requirement of ground users.

In Figure 7, we compare the total delay of ground users for the four algorithms versus the computation requirement of ground users. In this experiment, the total delay of all four algorithms increases as the computation requirement of the ground user increases. This can be explained by the fact that the computation capability allocated to ground users by the LEO satellites and the cloud server, as well as the local computation capability of ground users, is fixed. According to (4), (6) and (13), an increase in the computation requirement of ground users causes ground users, LEO satellites, and the cloud server to take more time to process the computation tasks. In contrast, the total delay of the proposed DDLCCO algorithm is lower than that of the vertical cooperation algorithm and the greedy algorithm and is close to that of the enumeration algorithm, which shows that the proposed DDLCCO algorithm can effectively reduce the total delay of ground users.

Figure 8. The total delay of ground users versus the number of ground users.

Finally, Figure 8 depicts the total delay of ground users for the four algorithms versus the number of ground users. Obviously, the total delay of the four algorithms increases as the number of ground users increases. This is because the number of computation tasks grows with the number of ground users, which means that there are more computation tasks to process. However, the growth for the proposed DDLCCO algorithm is much slower than that of the vertical cooperation algorithm and the greedy algorithm, and the gap with the enumeration algorithm is quite small. This is because the proposed DDLCCO algorithm provides more computation offloading opportunities for ground users. Thus, the total delay of the proposed DDLCCO algorithm is lower than that of the other algorithms. From the above analysis, it can be seen that, compared with the other benchmark algorithms, the proposed DDLCCO algorithm can effectively reduce the total delay of ground users.
VII. CONCLUSION
In this paper, we have introduced a cooperative computation offloading scheme for LEO satellite networks with a three-tier computation architecture, leveraging the vertical cooperation among ground users, LEO satellites, and the cloud server, as well as the horizontal cooperation between LEO satellites. To improve the quality of service for ground users, we have formulated an optimization problem that minimizes the total execution delay of ground users subject to the limited battery capacity of ground users and the computation capability of each LEO satellite. Since traditional optimization algorithms cannot solve the real-time computation offloading problem in LEO satellite networks with a time-varying environment, we have proposed a DDLCCO algorithm consisting of $K$ parallel DNNs to generate offloading decisions effectively. Extensive numerical results illustrate that the proposed DDLCCO algorithm accelerates convergence and effectively reduces the total execution delay of ground users.
ACKNOWLEDGEMENT
This work is partially supported by the National Key R&D Program of China (2020YFB1806900), by Ericsson, by the Natural Science Foundation of Jiangsu Province (No. BK20200822), by the Natural Science Foundation of Jiangsu Higher Education Institutions of China (No. 20KJB510036), and by the open research fund of the Key Lab of Broadband Wireless Communication and Sensor Network Technology (Nanjing University of Posts and Telecommunications), Ministry of Education (No. JZNY202103).
References
[1] T. de Cola and I. Bisio, "QoS optimisation of eMBB services in converged 5G-satellite networks," IEEE Transactions on Vehicular Technology, vol. 69, no. 10, 2020, pp. 12098-12110.
[2] R. N. Vallina and J. Crowcroft, "Energy management techniques in modern mobile handsets," IEEE Communications Surveys & Tutorials, vol. 15, no. 1, 2013, pp. 179-198.
[3] B. Li, Z. Fei, et al., "Physical layer security in space information networks: A survey," IEEE Internet of Things Journal, vol. 7, no. 1, 2020, pp. 33-52.
[4] B. Lorenzo, R. J. Garcia, et al., "A robust dynamic edge network architecture for the internet of things," IEEE Network, vol. 32, no. 1, 2018, pp. 8-15.
[5] L. Ji and S. Guo, "Energy-efficient cooperative resource allocation in wireless powered mobile edge computing," IEEE Internet of Things Journal, vol. 6, no. 3, 2019, pp. 4744-4754.
[6] C. Dai, J. Luo, et al., "Dynamic user association for resilient backhauling in satellite-terrestrial integrated networks," IEEE Systems Journal, vol. 14, no. 4, 2020, pp. 5025-5036.
[7] S. Yu, R. Langar, et al., "Computation offloading with data caching enhancement for mobile edge computing," IEEE Transactions on Vehicular Technology, vol. 67, no. 11, 2018, pp. 11098-11112.
[8] J. Zhang, Q. Cui, et al., "Two-timescale online learning of joint user association and resource scheduling in dynamic mobile edge computing," China Communications, vol. 18, no. 8, 2021, pp. 316-331.
[9] C. Jiang, T. Cao, et al., "Intelligent task offloading and collaborative computation over D2D communication," China Communications, vol. 18, no. 3, 2021, pp. 251-263.
[10] R. Xie, Q. Tang, et al., "Satellite-terrestrial integrated edge computing networks: Architecture, challenges, and open issues," IEEE Network, vol. 34, no. 3, 2020, pp. 224-231.
[11] F. Xu, F. Yang, et al., "Deep reinforcement learning based joint edge resource management in maritime network," China Communications, vol. 17, no. 5, 2020, pp. 211-222.
[12] A. Alsharoa and M.-S. Alouini, "Improvement of the global connectivity using integrated satellite-airborne-terrestrial networks with resource optimization," IEEE Transactions on Wireless Communications, vol. 19, no. 8, 2020, pp. 5088-5100.
[13] Y. Wang, J. Zhang, et al., "A computation offloading strategy in satellite terrestrial networks with double edge computing," in 2018 IEEE International Conference on Communication Systems (ICCS). Chengdu, China: IEEE, 2018, pp. 1-6.
[14] C. Chi, W. Li, et al., Convex Optimization for Signal Processing and Communications: From Fundamentals to Applications. Boca Raton: CRC Press, 2017.
[15] Q. Tang, Z. Fei, et al., "Computation offloading in LEO satellite networks with hybrid cloud and edge computing," IEEE Internet of Things Journal, vol. 8, no. 11, 2021, pp. 9164-9176.
[16] Y. Wang, J. Yang, et al., "A game-theoretic approach to computation offloading in satellite edge computing," IEEE Access, vol. 8, 2019, pp. 12510-12520.
[17] Z. Zhang, W. Zhang, et al., "Satellite mobile edge computing: Improving QoS of high-speed satellite-terrestrial networks using edge computing techniques," IEEE Network, vol. 33, no. 1, 2019, pp. 70-76.
[18] C. Qiu, H. Yao, et al., "Deep Q-learning aided networking, caching, and computing resources allocation in software-defined satellite-terrestrial networks," IEEE Transactions on Vehicular Technology, vol. 68, no. 6, 2019, pp. 5871-5883.
[19] L. Zhang, H. Zhang, et al., "Satellite-aerial integrated computing in disasters: User association and offloading decision," in 2020 IEEE International Conference on Communications (ICC). Dublin, Ireland: IEEE, 2020, pp. 1-6.
[20] J. Ren, G. Yu, et al., "Latency optimization for resource allocation in mobile-edge computation offloading," IEEE Transactions on Wireless Communications, vol. 17, no. 8, 2018, pp. 5506-5519.
[21] B. R. Elbert, Introduction to Satellite Communication. Norwood: Artech House, 1987.
[22] A. Abdi, W. C. Lau, et al., "A new simple model for land mobile satellite channels: First- and second-order statistics," IEEE Transactions on Wireless Communications, vol. 2, no. 3, 2003, pp. 519-528.
[23] Q. Shi, L. Zhao, et al., "Energy efficiency versus delay tradeoff in wireless networks virtualization," IEEE Transactions on Vehicular Technology, vol. 67, no. 1, 2018, pp. 837-841.
[24] A. P. Miettinen and J. K. Nurminen, "Energy efficiency of mobile clients in cloud computing," in Proceedings of the 2nd USENIX Conference on Hot Topics in Cloud Computing. Berkeley, USA: USENIX Association, 2010, pp. 1-6.
[25] Y. Mao, J. Zhang, et al., "Dynamic computation offloading for mobile-edge computing with energy harvesting devices," IEEE Transactions on Vehicular Technology, vol. 67, no. 1, 2016, pp. 837-841.
[26] F. Wang, J. Xu, et al., "Joint offloading and computing optimization in wireless powered mobile-edge computing systems," IEEE Transactions on Wireless Communications, vol. 17, no. 3, 2018, pp. 1784-1797.
[27] H. Sun, X. Chen, et al., "Learning to optimize: Training deep neural networks for wireless resource management," in 2017 IEEE 18th International Workshop on Signal Processing Advances in Wireless Communications (SPAWC). Sapporo, Japan: IEEE, 2017, pp. 1-6.
[28] H. Ye, G. Y. Li, et al., "Power of deep learning for channel estimation and signal detection in OFDM systems," IEEE Wireless Communications Letters, vol. 7, no. 1, 2018, pp. 114-117.
[29] L. Huang, S. Bi, et al., "Deep reinforcement learning for online computation offloading in wireless powered mobile-edge computing networks," IEEE Transactions on Mobile Computing, vol. 19, no. 11, 2020, pp. 2581-2593.
[30] S. Marsland, Machine Learning: An Algorithmic Perspective. Boca Raton: CRC Press, 2015.
[31] V. Mnih, K. Kavukcuoglu, et al., "Human-level control through deep reinforcement learning," Nature, vol. 518, no. 7540, 2015, pp. 529-533.
[32] D. P. Kingma and J. Ba, "Adam: A method for stochastic optimization," in International Conference on Learning Representations, 2015, pp. 1-15.
[33] Z. Chang, Y. Zhang, et al., "Effective Adam-optimized LSTM neural network for electricity price forecasting," in 2018 IEEE 9th International Conference on Software Engineering and Service Science (ICSESS). Beijing, China: IEEE, 2018, pp. 1-6.
[34] N. Cheng, F. Lyu, et al., "Space/aerial-assisted computing offloading for IoT applications: A learning-based approach," IEEE Journal on Selected Areas in Communications, vol. 37, no. 5, 2019, pp. 1117-1128.
Biographies
Qingqing Tang received the M.S. degree in communication and information systems from the Guilin University of Electronic Technology, Guilin, China, in 2019. She is currently pursuing the Ph.D. degree with the School of Information and Electronics, Beijing Institute of Technology (BIT), Beijing, China. Her research interests include wireless resource allocation, mobile edge computing, and satellite-terrestrial networks.

Zesong Fei received the Ph.D. degree in electronic engineering from the Beijing Institute of Technology (BIT), Beijing, China, in 2004. He is currently a Professor with the Research Institute of Communication Technology, BIT. He serves as a Lead Guest Editor for Wireless Communications and Mobile Computing and for the China Communications Special Issue on Error Control Coding. He is the Chief Investigator of the National Natural Science Foundation of China. His research interests include wireless communications and multimedia signal processing.

Bin Li received the Ph.D. degree in information and communication engineering from the Beijing Institute of Technology, China, in 2019. From 2013 to 2014, he was a Research Assistant with the Department of Electronic and Information Engineering, Hong Kong Polytechnic University, Hong Kong, China. From 2017 to 2018, he was a Visiting Student with the Department of Informatics, University of Oslo, Norway. In 2019, he joined the Nanjing University of Information Science and Technology, Nanjing, China. His research interests include unmanned aerial vehicle communications and mobile edge computing.
... Fortunately, researchers have intensively explored SEC strategies from different perspectives [12]- [30]. For instance, the problem of minimizing total energy consumption and/or latency of tasks has been investigated through the optimization of offloading decisions, and/or resource allocation under various network restrictions, e.g., computing capacity [12], [14], environmental dynamics [13], [29], [30], and topological changes [26]. ...
... Fortunately, researchers have intensively explored SEC strategies from different perspectives [12]- [30]. For instance, the problem of minimizing total energy consumption and/or latency of tasks has been investigated through the optimization of offloading decisions, and/or resource allocation under various network restrictions, e.g., computing capacity [12], [14], environmental dynamics [13], [29], [30], and topological changes [26]. Despite these advancements, the aforementioned challenges remain inadequately addressed. ...
... The terrestrial UDs transmit the packets to the satellite via the C band (6GHz) with a system uplink bandwidth of 20MHz while the satellite backhauls via the Ku band (12GHz) with a system downlink bandwidth of 200MHz [15]. The maximum transmit power of each terrestrial UD and the satellite, i.e., p D,max and p S,max are set to 24 dBm and 46 dBm respectively [21], [30]. Similar to that in [12], [29], we set the computing capability z D n and battery capacity E D,max of each terrestrial UD to 0.1 Gcycles/s and 5 mJ respectively. ...
Article
Full-text available
Satellite edge computing (SEC) has emerged as an innovative paradigm for future satellite-terrestrial integrated networks (STINs), expanding computation services by sinking computing capabilities into Low-Earth-Orbit (LEO) satellites. However, the mobility of LEO satellites poses two key challenges to SEC: 1) constrained onboard computing and transmission capabilities caused by limited and dynamic energy supply, and 2) stochastic task arrivals within the satellites' coverage and timevarying channel conditions. To tackle these issues, it is imperative to design an optimal SEC offloading strategy that effectively exploits the available energy of LEO satellites to fulfill competing task demands for SEC. In this paper, we propose a dynamic offloading strategy (DOS) with the aim to minimize the overall completion time of arriving tasks in an SEC-assisted STIN, subject to the long-term energy constraints of the LEO satellite. Leveraging Lyapunov optimization theory, we first convert the original long-term stochastic problem into multiple deterministic one-slot problems parameterized by current system states. Then we use sub-problem decomposition to jointly optimize the task offloading, computing, and communication resource allocation strategies. We theoretically prove that DOS achieves near-optimal performance. Numerical results demonstrate that DOS significantly outperforms the other four baseline approaches in terms of task completion time and dropping rate.
... With the increasing computing power of mobile devices, the deployment of machine learning-based algorithms in satellite resource allocation is attracting more attention [29], [30]. Satya Chan et al. [29] proposed a low-complexity power and frequency resource allocation method to minimize inter-component interference while maximizing user throughput. ...
... The scheme achieves excellent performance while keeping the complexity of the algorithm low. In [30], considering the limited battery capacity of terrestrial users and the computation capability of each LEO satellite, the authors trained a deep neural network model to minimize the total execution delay of terrestrial users. ...
Preprint
Low earth orbit (LEO) satellite systems play an important role in next generation communication networks due to their ability to provide extensive global coverage with guaranteed communications in remote and isolated areas where base stations cannot be cost-efficiently deployed. With the pervasive adoption of LEO satellite systems, especially in LEO Internet-of-Things (IoT) scenarios, their spectrum resource management requirements have become more complex as a result of massive service requests and high bandwidth demand from terrestrial terminals. For instance, when leasing the spectrum to terrestrial users and controlling the uplink transmit power, satellites collect user data for machine learning purposes, which is usually sensitive information such as location, budget, and quality of service (QoS) requirements. To facilitate model training in LEO IoT while preserving the privacy of data, blockchain-driven federated learning (FL) is widely used by leveraging a fully decentralized architecture. In this paper, we propose a hybrid spectrum pricing and power control framework for LEO IoT by combining blockchain technology and FL. We first design a local deep reinforcement learning algorithm for LEO satellite systems to learn a revenue-maximizing pricing and power control scheme. Then the agents collaborate to form an FL system. We also propose a reputation-based blockchain which is used in the global model aggregation phase of FL. Based on the reputation mechanism, a node is selected for each global training round to perform model aggregation and block generation, which can further enhance the decentralization of the network and guarantee trust. Simulation tests are conducted to evaluate the performance of the proposed scheme. Our results show the efficiency of the proposed scheme in finding the revenue-maximizing strategy for LEO satellite systems while preserving the privacy of each agent.
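As a toy illustration of the two mechanisms this abstract combines, the sketch below performs reputation-proportional selection of the round's aggregator followed by a plain FedAvg step over the agents' local models; the agent names, reputations, and model vectors are invented for illustration and do not reflect the paper's blockchain protocol.

```python
import random

# Toy reputation-based aggregator selection plus a FedAvg step; the agents,
# reputations, and model vectors are invented for illustration.

agents = {"sat1": {"rep": 0.9, "model": [0.2, 0.4]},
          "sat2": {"rep": 0.6, "model": [0.1, 0.5]},
          "sat3": {"rep": 0.8, "model": [0.3, 0.3]}}

# choose this round's aggregator with probability proportional to reputation
names = list(agents)
aggregator = random.choices(names, weights=[agents[n]["rep"] for n in names])[0]

# the aggregator averages the local models (equal weights assumed)
dim = len(agents[names[0]]["model"])
global_model = [sum(agents[n]["model"][i] for n in names) / len(names)
                for i in range(dim)]
print("aggregator:", aggregator)
print("global model:", [round(v, 2) for v in global_model])
```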
... Extensive studies have explored various computation offloading schemes with the goal of minimizing the task completion time [9]. Most of the existing works focus on the optimization of users' offloading decisions, such that tasks can be offloaded to lightly-loaded LEO satellites and thus processed more quickly [10]. ...
... where Buf_m denotes the size of the node's buffer. Besides, (6) and (7) indicate that a task is processed on only one satellite, and (8) and (9) imply that if a task is not migrated to other satellites, then the communication delay and queue delay introduced by migration are equal to 0. Finally, (10) and (11) are respectively the constraints on the total computation and storage capacity of each satellite. Obviously, to minimize the total delay of all tasks, it is optimal to fully utilize the computation resources of each satellite and reduce the queue delay, the computation delay, and the communication delays as much as possible. ...
Article
By deploying the ubiquitous and reliable coverage of low Earth orbit (LEO) satellite networks using optical inter-satellite links (OISLs), computation offloading services can be provided for users without proximal servers, while the limited computation and storage resources on satellites are an important factor affecting the maximum task completion time. In this paper, we study a delay-optimal multi-satellite collaborative computation offloading scheme that allows satellites to actively migrate tasks among themselves by employing the high-speed OISLs, such that tasks with long queuing delay will be served as quickly as possible by utilizing idle computation resources in the neighborhood. To satisfy the delay requirements of delay-sensitive tasks, we first propose a deadline-aware task scheduling scheme in which a priority model is constructed to sort the order in which tasks are served based on their deadlines, and then a delay-optimal collaborative offloading scheme is derived such that tasks which cannot be completed locally can be migrated to other idle satellites. Simulation results demonstrate the effectiveness of our multi-satellite collaborative computation offloading strategy in reducing task completion time and improving resource utilization of the LEO satellite network.
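A minimal sketch of the deadline-aware scheduling idea described above: tasks are queued earliest-deadline-first, and when the local queue exceeds an assumed capacity, the latest-deadline task is migrated to an idle neighbor over the ISL. The capacity threshold and the migration rule are illustrative assumptions, not the paper's priority model.

```python
import heapq

# Toy deadline-aware scheduler with migration; the capacity threshold and
# the migration rule are assumptions.

tasks = [("t1", 8.0), ("t2", 3.0), ("t3", 5.0)]   # (task id, deadline)
local_queue, neighbor_queue = [], []

for tid, deadline in tasks:
    heapq.heappush(local_queue, (deadline, tid))   # earliest deadline first

MAX_LOCAL = 2                                      # assumed local capacity
while len(local_queue) > MAX_LOCAL:
    # migrate the latest-deadline task to an idle neighbor over the ISL
    victim = max(local_queue)
    local_queue.remove(victim)
    heapq.heapify(local_queue)
    heapq.heappush(neighbor_queue, victim)

print("served locally:", [tid for _, tid in sorted(local_queue)])
print("migrated:", [tid for _, tid in sorted(neighbor_queue)])
```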
... Tang et al. [12] proposed a hybrid cloud-edge computing scheme that optimizes total energy consumption using the Alternating Direction Method of Multipliers (ADMM) algorithm; however, it only considered a single LEO satellite and did not account for collaborative offloading among multiple LEO satellites. Furthermore, Tang et al.'s subsequent research [13] constructed a three-layer computing framework based on distributed deep learning to address computation offloading and reduce execution latency, but neglected potential task dependencies. Within the framework of satellite-ground IoT with MEC, Song et al. [14] proposed an energy-efficient computation offloading and resource allocation algorithm aimed at minimizing the total energy consumption of IoT devices, but did not consider interference management between multiple satellite terminals. ...
Article
Full-text available
Satellite edge computing has garnered significant attention from researchers; however, processing a large volume of tasks within multi-node satellite networks still poses considerable challenges. The sharp increase in user demand for latency-sensitive tasks has inevitably led to offloading bottlenecks and insufficient computational capacity on individual satellite edge servers, making it necessary to implement effective task offloading scheduling to enhance user experience. In this paper, we propose a priority-based task scheduling strategy based on a Software-Defined Network (SDN) framework for satellite-terrestrial integrated networks, which clarifies the execution order of tasks based on their priority. Subsequently, we apply a Dueling-Double Deep Q-Network (DDQN) algorithm enhanced with prioritized experience replay to derive a computation offloading strategy, improving the experience replay mechanism within the Dueling-DDQN framework. Next, we utilize the Deep Deterministic Policy Gradient (DDPG) algorithm to determine the optimal resource allocation strategy to reduce the processing latency of sub-tasks. Simulation results demonstrate that the proposed d3-DDPG algorithm outperforms other approaches, effectively reducing task processing latency and thus improving user experience and system efficiency.
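One building block this abstract mentions, prioritized experience replay, can be sketched as follows: transitions are sampled in proportion to their TD-error-derived priorities. The exponent alpha and the toy transitions are assumptions, and importance-sampling corrections are omitted for brevity.

```python
import random

# Minimal sketch of proportional prioritized experience replay; alpha and
# the toy transitions are illustrative assumptions.

class PrioritizedReplay:
    def __init__(self, alpha=0.6):
        self.alpha, self.buffer, self.priorities = alpha, [], []

    def add(self, transition, td_error):
        # priority grows with the magnitude of the TD error
        self.buffer.append(transition)
        self.priorities.append((abs(td_error) + 1e-6) ** self.alpha)

    def sample(self, k):
        # sample transitions with probability proportional to priority
        total = sum(self.priorities)
        probs = [p / total for p in self.priorities]
        return random.choices(self.buffer, weights=probs, k=k)

buf = PrioritizedReplay()
for i in range(10):
    buf.add({"s": i, "a": i % 2, "r": random.random()},
            td_error=random.random())
print(buf.sample(3))
```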
... The first solution entails a fully distributed learning environment, entitled OOSDL [64]; the second is DDLCO [65], in which end devices send experiences to the central server via satellites, and model training occurs exclusively on the central server. ...
Article
Full-text available
Efficiently handling huge data amounts and enabling processing-intensive applications to run in faraway areas simultaneously is the ultimate objective of 5G networks. Currently, in order to distribute computing tasks, ongoing studies are exploring the incorporation of fog-cloud servers onto satellites, presenting a promising solution to enhance connectivity in remote areas. Nevertheless, analyzing the copious amounts of data produced by scattered sensors remains a challenging endeavor. The conventional strategy of transmitting this data to a central server for analysis can be costly. In contrast to centralized learning methods, distributed machine learning (ML) provides an alternative approach, albeit with notable drawbacks. This paper addresses the comparative learning expenses of centralized and distributed learning systems to tackle these challenges directly. It proposes the creation of an integrated system that harmoniously merges cloud servers with satellite network structures, leveraging the strengths of each system. This integration could represent a major breakthrough in satellite-based networking technology by streamlining data processing from remote nodes and cutting down on expenses. The core of this approach lies in the adaptive tailoring of learning techniques for individual entities based on their specific contextual nuances. The experimental findings underscore the prowess of the innovative lightweight strategy, LMAED²L (Enhanced Deep Learning for Earth Data Analysis), across a spectrum of machine learning assignments, showcasing remarkable and consistent performance under diverse operational conditions. Through a strategic fusion of centralized and distributed learning frameworks, the LMAED²L method emerges as a dynamic and effective remedy for the intricate data analysis challenges encountered within satellite networks interfaced with cloud servers. The empirical findings reveal a significant performance boost of our novel approach over traditional methods, with an average increase in reward (4.1%), task completion rate (3.9%), and delivered packets (3.4%). This report suggests that these advancements will catalyze the integration of cutting-edge machine learning algorithms within future networks, elevating responsiveness, efficiency, and resource utilization to new heights.
Article
With the expansive deployment of ground base stations, low Earth orbit (LEO) satellites, and aerial platforms such as unmanned aerial vehicles (UAVs) and high altitude platforms (HAPs), the concept of space-air-ground integrated network (SAGIN) has emerged as a promising architecture for future 6G wireless systems. In general, SAGIN aims to amalgamate terrestrial nodes, aerial platforms, and satellites to enhance global coverage and ensure seamless connectivity. Moreover, beyond mere communication functionality, computing capability is increasingly recognized as a critical attribute of sixth generation (6G) networks. To address this, integrated communication and computing have recently been advocated as a viable approach. Additionally, to overcome the technical challenges of complicated systems such as high mobility, unbalanced payloads, limited resources, and various demands in communication and computing among different network segments, various solutions have been introduced recently. Consequently, this paper offers a comprehensive survey of the technological advances in communication and computing within SAGIN for 6G, including the system architecture, network characteristics, general communication, and computing technologies. Subsequently, we summarize the pivotal technologies of SAGIN-enabled 6G, including the physical layer, medium access control (MAC) layer, and network layer. Finally, we explore the technical challenges and future trends in this field.
Article
Satellite communication networks with the characteristics of wide coverage, high deployment flexibility, and seamless communication services can provide communication services to users who cannot reach ground networks and instead communicate directly with satellites. In response to the increasing demand for user services, this paper proposes a collaborative computing offloading scheme for satellite edge computing networks with a four-layer architecture. By utilizing collaborative computing between ground users and three layers of satellites (low-orbit satellites, edge, and cloud data centers), the service quality for ground users is improved. Owing to the mobility of vehicles and satellite nodes, frequent changes in link states further complicate the design and implementation of such systems, leading to increased latency and energy consumption. This paper proposes to optimize the computation offloading decision while satisfying the constraint of satellite computing capabilities, aiming to improve the success rate of tasks and minimize the overall cost of the system. However, with the increase in the number of ground users and satellites, the formulated problem becomes a mixed-integer nonlinear programming (MINLP) problem, which is difficult to solve with general optimization algorithms. To address this issue, this paper proposes a dynamic distributed learning offloading (DDLDO) algorithm based on distributed deep learning. The algorithm utilizes multiple parallel deep neural networks (DNNs) to dynamically learn computation offloading strategies. Simulation results demonstrate that the algorithm outperforms other benchmark algorithms in terms of latency, energy consumption, and successful execution efficiency.
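The "multiple parallel DNNs" idea that DDLDO shares with the DDLCCO algorithm of the present paper can be caricatured as follows: several randomly initialized networks each map the task state to a candidate binary offloading decision, and the lowest-cost candidate is kept (in the full algorithm it would also be stored in a replay memory for training). The one-layer networks, dimensions, and cost function below are purely illustrative assumptions.

```python
import numpy as np

# Caricature of parallel-DNN offloading: K networks propose candidate binary
# decisions; the cheapest one is selected. All shapes/costs are assumptions.

rng = np.random.default_rng(0)
K, STATE_DIM, N_TASKS = 4, 6, 3          # number of parallel DNNs, toy sizes

def dnn_forward(w, state):
    """One-layer 'DNN' producing per-task offloading probabilities."""
    return 1.0 / (1.0 + np.exp(-(state @ w)))   # sigmoid activation

def cost(decision, state):
    """Stand-in for the execution delay of a candidate decision."""
    return float(np.sum(decision * state[:N_TASKS]) + np.sum(1 - decision))

weights = [rng.normal(size=(STATE_DIM, N_TASKS)) for _ in range(K)]
state = rng.random(STATE_DIM)            # toy task/channel state vector

candidates = [(dnn_forward(w, state) > 0.5).astype(int) for w in weights]
best = min(candidates, key=lambda d: cost(d, state))
print("candidate decisions:", [c.tolist() for c in candidates])
print("selected offloading decision:", best.tolist())
```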
Article
Satellite edge computing, as an extension of ground edge computing, is a key technology for providing computing services by deploying resources on low earth orbit (LEO) satellites. However, the temporal and spatial differences in population density and economic levels may lead to unbalanced computation workloads on LEO satellites. Considering the mobility and limited resources inherent in LEO satellites, effectively utilizing the LEO satellite network to meet global competitive demands for task offloading becomes challenging. Therefore, in this paper, we propose an adaptive task offloading approach with spatiotemporal load awareness, named ATO-SLA, in satellite edge computing, aiming to optimize users' perceived delay and energy consumption. Specifically, to avoid LEO satellite overload, we first introduce the spatiotemporal load factor, formally modelling the spatiotemporal load-aware task offloading problem. Then, the Markov decision process is employed to structure the task offloading decision process. Afterwards, we propose a task offloading algorithm based on a proximal policy optimization strategy to adaptively solve the problem. Finally, experimental results demonstrate that ATO-SLA achieves a lower average delay and average energy consumption compared with other approaches.
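The abstract does not define its spatiotemporal load factor, but one plausible toy form, blending a satellite's current queue load with the expected demand of the region it is moving over, is sketched below; the weighting, numbers, and selection rule are assumptions, not the paper's formulation.

```python
# Toy "spatiotemporal load factor": blend current queue load with the
# expected demand of the upcoming coverage region. Weighting is assumed.

def load_factor(queue_load, region_demand, w=0.5):
    """Blend current load and upcoming regional demand (both in [0, 1])."""
    return w * queue_load + (1 - w) * region_demand

sats = {"sat1": (0.8, 0.2),   # (current queue load, upcoming region demand)
        "sat2": (0.4, 0.9),
        "sat3": (0.3, 0.3)}
scores = {s: load_factor(q, r) for s, (q, r) in sats.items()}
target = min(scores, key=scores.get)   # offload to the least-loaded satellite
print(scores, "-> offload to", target)
```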
Article
Full-text available
In this paper, we propose a novel wireless scheme that integrates satellite, airborne, and terrestrial networks aiming to support ground users. More specifically, we study the enhancement of the users' achievable throughput assisted by terrestrial base stations, high-altitude platforms (HAPs), and satellite stations. The goal is to optimize the resource allocations and the HAPs' locations in order to maximize the users' throughput. In this context, we formulate and solve an optimization problem in two stages: a short-term stage and a long-term stage. In the short-term stage, we start by proposing an approximated solution and a low-complexity solution to solve the associations and power allocations. In the approximated solution, we formulate and solve a binary linear optimization problem to find the best associations and then we use the Taylor expansion approximation to optimally determine the power allocations. In the latter solution, we propose a low-complexity approach based on a frequency partitioning technique to solve the associations and power allocations. On the other hand, in the long-term stage, we optimize the locations of the HAPs by proposing an efficient algorithm based on a recursive shrink-and-realign process. Finally, selected numerical results underline the advantages provided by our proposed optimization scheme.
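The recursive shrink-and-realign process can be pictured with the toy search below: candidate HAP locations are sampled in a region, the region is re-centered on the best candidate and shrunk, and the process repeats. The throughput objective here is a stand-in for the users' sum rate, not the paper's model.

```python
import random

# Toy shrink-and-realign search for a HAP location; the objective and all
# parameters are illustrative assumptions.

def throughput(x, y):
    """Toy objective peaking at (3, 4); stands in for the users' sum rate."""
    return -((x - 3) ** 2 + (y - 4) ** 2)

center, radius = (0.0, 0.0), 10.0
for _ in range(6):                       # a few shrink-and-realign rounds
    cands = [(center[0] + random.uniform(-radius, radius),
              center[1] + random.uniform(-radius, radius)) for _ in range(20)]
    center = max(cands, key=lambda p: throughput(*p))   # realign to the best
    radius *= 0.5                                       # shrink the region
print("HAP location ~", (round(center[0], 2), round(center[1], 2)))
```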
Article
For a mobile edge computing network consisting of multiple base stations and resource-constrained user devices, network cost in terms of energy and delay is incurred during task offloading from the user to the edge server. Under the limitations imposed on transmission capacity, computing resources, and connection capacity, a per-slot online learning algorithm is first proposed to minimize the time-averaged network cost. In particular, by leveraging the theories of stochastic gradient descent and minimum cost maximum flow, the user association is jointly optimized with resource scheduling in each time slot. The theoretical analysis proves that the proposed approach can achieve asymptotic optimality without any prior knowledge of the network environment. Moreover, to alleviate the high network overhead incurred during user handover and task migration, a two-timescale optimization approach is proposed to avoid frequent changes in user association. With user association executed on a large timescale and resource scheduling decided in each time slot, the asymptotic optimality is preserved. Simulation results verify the effectiveness of the proposed online learning algorithms.
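The minimum cost maximum flow view of user association mentioned above can be sketched with networkx: users and base stations form a bipartite flow network whose edge weights model the per-association network cost. All users, capacities, and costs below are invented for illustration.

```python
import networkx as nx

# Toy min-cost max-flow formulation of user-to-BS association; users,
# capacities, and costs are illustrative assumptions.

costs = {("u1", "bs1"): 1, ("u1", "bs2"): 3,
         ("u2", "bs1"): 2, ("u2", "bs2"): 2,
         ("u3", "bs1"): 4, ("u3", "bs2"): 1}   # per-association network cost

G = nx.DiGraph()
for u in ("u1", "u2", "u3"):
    G.add_edge("s", u, capacity=1, weight=0)    # each user gets one connection
for (u, b), c in costs.items():
    G.add_edge(u, b, capacity=1, weight=c)      # energy+delay cost of the link
for b in ("bs1", "bs2"):
    G.add_edge(b, "t", capacity=2, weight=0)    # connection capacity of the BS

flow = nx.max_flow_min_cost(G, "s", "t")
assoc = {u: b for (u, b) in costs if flow[u].get(b, 0) > 0}
print(assoc)   # e.g. {'u1': 'bs1', 'u2': 'bs1', 'u3': 'bs2'}
```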
Article
In this paper, the problem of computation offloading to the edge server is studied in a mobile edge computing (MEC)-enabled cell network that consists of a base station (BS) integrating edge servers, several terminal devices, and collaborators. In the considered network, we develop an intelligent task offloading and collaborative computation scheme to achieve the optimal computation offloading. First, a distance-based collaborator screening method is proposed to select collaborators within the distance threshold that have high power. Second, based on Lyapunov stochastic optimization theory, the system stability problem is transformed into a queue stability issue, and the optimal computation offloading is obtained by solving three sub-problems: task allocation control, task execution control, and queue update. Moreover, rigorous experimental simulations show that our proposed computation offloading algorithm can achieve joint optimization of system efficiency, energy consumption, and time delay compared with the mobility-aware and migration-enabled approach, Full BS, and Full Local schemes.
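The distance-based collaborator screening step can be illustrated as follows: candidates beyond an assumed distance threshold are discarded, and the survivors are ranked by available power. Positions, powers, and the threshold are toy values, not the paper's parameters.

```python
import math

# Toy distance-based collaborator screening: keep candidates within the
# threshold, then prefer higher available power. All values are assumed.

device = (0.0, 0.0)
candidates = {"c1": ((30.0, 40.0), 2.0),   # (position, available power)
              "c2": ((10.0, 10.0), 1.5),
              "c3": ((200.0, 5.0), 3.0)}
D_MAX = 60.0                               # assumed distance threshold

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

screened = {n: pw for n, ((x, y), pw) in candidates.items()
            if dist(device, (x, y)) <= D_MAX}
# rank the survivors by available power, highest first
ranked = sorted(screened, key=screened.get, reverse=True)
print("collaborators:", ranked)
```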
Article
Low earth orbit (LEO) satellite networks can break through geographical restrictions and achieve global wireless coverage, which is an indispensable choice for future mobile communication systems. In this paper, we present a hybrid cloud and edge computing LEO satellite (CECLS) network with a three-tier computation architecture, which can provide ground users with heterogeneous computation resources and enable ground users to obtain computation services around the world. With the CECLS architecture, we investigate the computation offloading decisions to minimize the sum energy consumption of ground users, while satisfying the constraints in terms of the coverage time and the computation capability of each LEO satellite. The considered problem is discrete and non-convex since the objective function and constraints contain binary variables, which makes it difficult to solve. To address this challenging problem, we convert the original non-convex problem into a linear programming problem by using the binary variables relaxation method. Then, we propose a distributed algorithm by leveraging the alternating direction method of multipliers (ADMM) to approximate the optimal solution with low computational complexity. Simulation results show that the proposed algorithm can effectively reduce the total energy consumption of ground users.
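The binary-relaxation step this abstract describes (the ADMM machinery itself is omitted here) can be sketched with scipy: the 0/1 offloading variables are relaxed to [0, 1], the resulting linear program is solved, and the solution is rounded back to binary. The energy figures and the capacity constraint are assumptions for illustration.

```python
from scipy.optimize import linprog

# Sketch of binary-variable relaxation for an offloading LP; all data and
# the capacity constraint are illustrative assumptions.

e_local = [3.0, 2.5, 4.0]      # local-execution energy of each user
e_off   = [1.0, 2.0, 1.5]      # offloading energy of each user
# total energy = sum(e_local) + sum((e_off - e_local) * x), so minimize c @ x
c = [eo - el for eo, el in zip(e_off, e_local)]

# capacity constraint: at most 2 users may offload to the satellite (assumed)
A_ub, b_ub = [[1, 1, 1]], [2]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, 1)] * 3)

x_relaxed = res.x
x_binary = [1 if v > 0.5 else 0 for v in x_relaxed]   # simple rounding
print("relaxed:", [round(v, 2) for v in x_relaxed], "rounded:", x_binary)
```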
Article
The integration of satellite communications into the 5G ecosystem is pivotal to boost enhanced mobile broadband (eMBB) services in highly dynamic scenarios and in areas not optimally supported by terrestrial infrastructures. Given the heterogeneity of the networks involved, network slicing is a key networking paradigm to ensure different grades of quality of service (QoS) based on the users' and verticals' requirements. In this light, this paper proposes an optimisation framework able to exploit the available resources allocated to the defined network slices so as to meet the diverse QoS/QoE requirements exposed by the network actors. Resource allocation schemes built upon neural network algorithms are validated through extensive simulation campaigns that have shown the superiority of the proposed concepts with respect to other solution candidates available in the literature.
Article
The satellite–terrestrial integrated networks (STINs) have gradually become a new class of effective ways to satisfy the requirements of higher capacity and stronger connectivity in future communications. In contrast with terrestrial networks, the fast periodic motion of satellites results in the dynamic time-varying features of STIN, which further leads to frequent changes in the connectivity of satellite–terrestrial links and the backhaul capacities of satellite networks. To balance the accessible capacity of STIN under intermittent connectivity and dynamic backhaul capacity, an effective user association mechanism is needed. In this article, a dynamic user association (DUA) mechanism with task classification is proposed to meet the requirements of load balancing and user task processing. First, a STIN model is constructed with low earth orbit satellites and three types of base stations: a macro base station, small cell base stations, and low earth orbit based base stations. After that, the optimization problem is formulated by jointly considering the task classification, the load condition of base stations, and the backhaul capacity of low earth orbit based base stations. Then, the DUA mechanism is proposed to find the most suitable base station to serve each user. In DUA, a dynamic cell range extension algorithm is developed to adjust the load of STIN in terms of the resilient backhaul capacity, and a greedy-based user-centric user association with task classification algorithm is proposed to find, for each user, the base station with the maximum rate and minimum load, and to meet the requirements of user task processing. The simulation results show that the proposed DUA can enhance load balance and guarantee the task processing demand of STIN compared with the reference signal receiving power association and the max-sum rate association algorithms.
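The greedy user-centric association rule can be caricatured as below: each user picks, among base stations with spare capacity, the one with the maximum rate, breaking ties toward the lighter load. Rates, capacities, and the tie-break rule are illustrative assumptions.

```python
# Toy greedy user-centric association: highest rate among admissible base
# stations, lighter load as tie-breaker. All values are assumed.

rates = {"u1": {"macro": 5.0, "small": 8.0, "leo": 3.0},
         "u2": {"macro": 6.0, "small": 4.0, "leo": 7.0}}
load = {"macro": 0, "small": 0, "leo": 0}
CAPACITY = {"macro": 2, "small": 1, "leo": 1}    # max users per base station

assignment = {}
for user, r in rates.items():
    feasible = [b for b in r if load[b] < CAPACITY[b]]
    # maximum rate first, minimum current load as the tie-breaker
    best = max(feasible, key=lambda b: (r[b], -load[b]))
    assignment[user] = best
    load[best] += 1
print(assignment)   # e.g. {'u1': 'small', 'u2': 'leo'}
```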
Article
Due to the rapid development of maritime networks, there has been a growing demand for computation-intensive applications with various energy consumption, transmission bandwidth, and computing latency requirements. Mobile edge computing (MEC) can efficiently minimize computational latency by offloading computation tasks via the terrestrial access network. In this work, we introduce a space-air-ground-sea integrated network architecture with edge and cloud computing components to provide flexible hybrid computing services for maritime applications. In the integrated network, satellites and unmanned aerial vehicles (UAVs) provide the users with edge computing services and network access. Based on this architecture, the joint communication and computation resource allocation problem is modelled as a complex decision process, and a deep reinforcement learning based solution is designed to solve the complex optimization problem. Finally, numerical results verify that the proposed approach can greatly improve the communication and computing efficiency.
Article
STN has been considered a novel network architecture to accommodate a variety of services and applications in future networks. Being a promising paradigm, MEC has been regarded as a key technology enabler to offer further service innovation and business agility in STN. However, most of the existing research on MEC-enabled STN regards a satellite network as a relay network, and the feasibility of processing tasks directly on the satellites is largely ignored. Moreover, the problems of multi-layer edge computing architecture design and heterogeneous edge computing resource co-scheduling have not been fully considered. Therefore, different from previous works, in this article, we propose a novel architecture named STECN, in which computing resources exist in multi-layer heterogeneous edge computing clusters. The detailed functional components of the proposed STECN are discussed, and we present the promising technical challenges, including meeting QoE requirements, cooperative computation offloading, multi-node task scheduling, mobility management, and fault/failure recovery. Finally, some potential research issues for future research are highlighted.