Article

Stochastic Computation Offloading for LEO Satellite Edge Computing Networks: A Learning-Based Approach

Abstract

The deployment of mobile edge computing services in LEO satellite networks achieves seamless coverage of computing services. However, the time-varying conditions of satellite-terrestrial wireless channels and the random arrival of ground users' tasks pose new challenges for managing the LEO satellite's communication and computing resources. Facing these challenges, a stochastic computation offloading problem is formulated that jointly optimizes communication and computing resource allocation and computation offloading decisions, minimizing the long-term average total power cost of the ground users and the LEO satellite under a long-term task queue stability constraint. However, the computing resource allocation and computation offloading decisions are coupled across slots, which makes the problem challenging to solve. To this end, we first employ Lyapunov optimization to decouple the long-term stochastic computation offloading problem into deterministic per-slot subproblems. Then, an online algorithm combining deep reinforcement learning and conventional optimization is proposed to solve these subproblems. Simulation results show that the proposed algorithm achieves superior performance while ensuring the stability of all task queues in LEO satellite networks.
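
The per-slot decoupling can be made concrete. Below is a minimal sketch (not the authors' code) of a Lyapunov drift-plus-penalty decision for a single task queue: in each slot the controller picks the power level minimizing V * power - q * (bits served), trading the power penalty against queue drift. The rate model, arrival process, and all parameter values are illustrative assumptions.

```python
import numpy as np

def drift_plus_penalty_action(q, V, power_levels, service_rate):
    """Per-slot deterministic subproblem: choose the power level minimizing
    V * power - q * service_rate(power). A larger backlog q biases the choice
    toward serving more bits; a larger V biases it toward saving power."""
    costs = [V * p - q * service_rate(p) for p in power_levels]
    return power_levels[int(np.argmin(costs))]

# Toy episode with random arrivals; q_{t+1} = max(q_t - served, 0) + arrivals.
rng = np.random.default_rng(0)
service = lambda p: 2e3 * np.log2(1.0 + 10.0 * p)  # stand-in rate model (bits/slot)
q, V = 0.0, 50.0
for t in range(200):
    p = drift_plus_penalty_action(q, V, [0.0, 0.1, 0.5, 1.0], service)
    q = max(q - service(p), 0.0) + rng.poisson(1500)
print(f"backlog after 200 slots: {q:.0f} bits (stable for suitable V)")
```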


... With a common goal of energy optimization, the authors of [19]- [21] all adopt alternating optimization for multi-satellite edge computing to optimize users' association, power control, task scheduling, and computing resource allocation. To optimize long-term power cost, a Lyapunov optimization method is proposed in [22] considering the time-varying satellite-terrestrial channel conditions and the random task arrivals. With a concern for privacy protection [31], [32], a blockchain-aided Stackelberg game model is proposed in [23] for maximizing privacy overhead and network throughput via a Lyapunov optimization-based meta-learning method. ...
... Nevertheless, little attention has been paid to the time-varying energy evolution of satellites yet, which is indeed crucial in STINs as stated. The excerpt also carries rows of the citing paper's comparison table (three feature columns whose headers are not included in the excerpt, followed by the methodology):
[14], [16]  ✗ ✗ ✗  Convex optimization
[17]        ✗ ✗ ✗  MARL
[15]        ✗ ✗ ✓  RL (PPO)
[18]-[21]   ✓ ✗ ✗  Convex optimization
[22]        ✓ ✓ ✗  Lyapunov optimization + convex optimization
[24]        ✓ ✗ ✓  RL (DDPG)
[26]        ✓ ✗ ✓  Convex optimization
[27]        ✓ ✗ ✓  Lyapunov optimization + delayed online learning
[28]        ✓ ✗ ✓  Lyapunov optimization + delayed online learning
Our work    ✓ ✓ ✓  Lyapunov optimization + convex optimization ...
... 2) Online methods: Several studies have also explored online approaches, generally developed based on Lyapunov optimization [22], [27], [28] or RL [15], [17], [24]. For instance, the Lyapunov framework integrated with delayed online learning is utilized in [28] for optimizing multi-hop satellite peer offloading with uncertain future workloads. ...
Article
Full-text available
Satellite edge computing (SEC) has emerged as an innovative paradigm for future satellite-terrestrial integrated networks (STINs), expanding computation services by sinking computing capabilities into Low-Earth-Orbit (LEO) satellites. However, the mobility of LEO satellites poses two key challenges to SEC: 1) constrained onboard computing and transmission capabilities caused by limited and dynamic energy supply, and 2) stochastic task arrivals within the satellites' coverage and time-varying channel conditions. To tackle these issues, it is imperative to design an optimal SEC offloading strategy that effectively exploits the available energy of LEO satellites to fulfill competing task demands. In this paper, we propose a dynamic offloading strategy (DOS) that aims to minimize the overall completion time of arriving tasks in an SEC-assisted STIN, subject to the long-term energy constraints of the LEO satellite. Leveraging Lyapunov optimization theory, we first convert the original long-term stochastic problem into multiple deterministic one-slot problems parameterized by current system states. Then we use sub-problem decomposition to jointly optimize the task offloading, computing, and communication resource allocation strategies. We theoretically prove that DOS achieves near-optimal performance. Numerical results demonstrate that DOS significantly outperforms four baseline approaches in terms of task completion time and dropping rate.
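
Long-term energy constraints of this kind are typically enforced with a virtual energy queue in the Lyapunov conversion. The snippet below sketches the textbook construction (an assumption about the general technique, not this paper's exact formulation); the budget value and per-slot energy draws are invented.

```python
def virtual_queue_update(Z, e_t, E_avg):
    """Standard virtual-queue update for a long-term average constraint
    (1/T) * sum_t e_t <= E_avg: Z_{t+1} = max(Z_t + e_t - E_avg, 0).
    If Z_t stays bounded (queue stability), the constraint is satisfied."""
    return max(Z + e_t - E_avg, 0.0)

# Per slot, the controller then minimizes V * completion_time + Z * e_t,
# so a large backlog Z forces energy-frugal decisions (V is the tradeoff knob).
Z = 0.0
for e_t in [0.8, 1.4, 0.6, 1.1]:          # toy per-slot energy draws (J)
    Z = virtual_queue_update(Z, e_t, E_avg=1.0)
print(f"virtual energy backlog: {Z:.2f}")
```
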
... In [31], the authors considered utilizing a single LEO satellite with MEC equipment for computation offloading. Specifically, ground users process their task data locally or offload the data to the MEC-enabled satellite, where the to-be-processed data wait in task queues. ...
... (Row of the citing paper's summary table:) Computation offloading; [31]; an LEO satellite; energy; proposes a joint offloading decision and resource allocation scheme based on Lyapunov optimization and deep reinforcement learning. ...
Article
Full-text available
The sixth-generation (6G) network is envisioned to shift its focus from the service requirements of human beings to those of Internet-of-Things (IoT) devices. Satellite communications are indispensable in 6G to support IoT devices operating in rural or disaster areas. However, satellite networks face the inherent challenges of low data rate and large latency, which may not support computation-intensive and delay-sensitive IoT applications. Mobile Edge Computing (MEC) is a burgeoning paradigm by extending cloud computing capabilities to the network edge. Using MEC technologies, the resource-limited IoT devices can access abundant computation resources with low latency, which enables the highly demanding applications while meeting strict delay requirements. Therefore, an integration of satellite communications and MEC technologies is necessary to better enable 6G IoT. In this survey, we provide a holistic overview of satellite-MEC integration. We first categorize the related studies based on three minimal structures and summarize current advances. For each minimal structure, we discuss the lessons learned and possible future directions. We also summarize studies considering the combination of minimal structures. Finally, we outline potential research issues to envision a more intelligent, more secure, and greener integrated satellite-MEC network.
... The authors of [29] designed an architecture for LEO edge-computing satellites supporting IoT devices and proposed a low-complexity offloading and scheduling algorithm. In [30], an online algorithm was introduced for resource allocation and offloading decisions to minimize power consumption for both end users and LEO satellites. In [31], a system model based on the Stackelberg game was designed for situations involving LEO satellite networks and large-scale end users, along with an offloading decision algorithm for end users based on the mean-field game. ...
Article
Full-text available
The combination of software-defined networking (SDN) and satellite–ground integrated networks (SGINs) is gaining attention as a key infrastructure for meeting the granular quality-of-service (QoS) demands of next-generation mobile communications. However, due to the unpredictable nature of end-user requests and the limited resource capacity of low Earth orbit (LEO) satellites, improper Virtual Network Function (VNF) deployment can lead to significant increases in end-to-end (E2E) delay. To address this challenge, we propose an online algorithm that jointly deploys VNFs and forms routing paths in an event-driven manner in response to end-user requests. The proposed algorithm selectively deploys only the essential VNFs required for each Service Function Chain (SFC), focusing on minimizing E2E delay—a critical QoS parameter. By defining a minimum-hop region (MHR) based on the geographic coordinates of the routing endpoints, we reduce the search space for candidate base stations, thereby designing paths that minimize propagation delays. VNFs are then deployed along these paths to further reduce E2E delay. Simulations demonstrate that the proposed algorithm closely approximates the global optimum, achieving up to 97% similarity in both E2E delay and CPU power consumption, with an average similarity of approximately 90%.
... (Excerpt from the cited paper's algorithm listing:) Sample a mini-batch of N transitions (ψ_t, Ω_t, R_t, ψ_{t+1}) from B; calculate the Q-target using (17); update the critic loss using (18); if t mod d = 0, update the actor network using (19). ...
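
That excerpt matches the usual delayed actor-critic update of TD3-style methods. Below is a runnable single-critic skeleton of the loop in PyTorch; the network shapes, hyperparameters, and loss forms are toy assumptions, equations (17)-(19) of the cited paper are only referenced by analogy in the comments, and full TD3 additionally uses twin critics and target-policy smoothing.

```python
import copy
import torch
import torch.nn as nn

state_dim, act_dim, d, gamma, tau = 4, 2, 2, 0.99, 0.005
actor  = nn.Sequential(nn.Linear(state_dim, 32), nn.ReLU(), nn.Linear(32, act_dim), nn.Tanh())
critic = nn.Sequential(nn.Linear(state_dim + act_dim, 32), nn.ReLU(), nn.Linear(32, 1))
actor_tgt, critic_tgt = copy.deepcopy(actor), copy.deepcopy(critic)
opt_a = torch.optim.Adam(actor.parameters(), lr=1e-3)
opt_c = torch.optim.Adam(critic.parameters(), lr=1e-3)

def update(batch, t):
    s, a, r, s2 = batch                                    # (ψ_t, Ω_t, R_t, ψ_{t+1})
    with torch.no_grad():                                  # Q-target, cf. the paper's (17)
        q_tgt = r + gamma * critic_tgt(torch.cat([s2, actor_tgt(s2)], dim=-1))
    q = critic(torch.cat([s, a], dim=-1))
    critic_loss = nn.functional.mse_loss(q, q_tgt)         # critic loss, cf. (18)
    opt_c.zero_grad(); critic_loss.backward(); opt_c.step()
    if t % d == 0:                                         # delayed actor update, cf. (19)
        actor_loss = -critic(torch.cat([s, actor(s)], dim=-1)).mean()
        opt_a.zero_grad(); actor_loss.backward(); opt_a.step()
        for net, tgt in ((actor, actor_tgt), (critic, critic_tgt)):
            for p, pt in zip(net.parameters(), tgt.parameters()):
                pt.data.mul_(1 - tau).add_(tau * p.data)   # soft target update

# Dummy mini-batch just to show the call pattern:
s, a = torch.randn(16, state_dim), torch.randn(16, act_dim)
r, s2 = torch.randn(16, 1), torch.randn(16, state_dim)
for t in range(1, 5):
    update((s, a, r, s2), t)
```
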
Article
Full-text available
The Internet of Things (IoT) system provides sensing and computing services via terrestrial networks. However, the restricted coverage of terrestrial networks, such as base stations, limits ubiquitous IoT services. Low Earth Orbit (LEO) satellites are able to provide network coverage for terrestrial IoT devices in unconnected scenarios, e.g., maritime. IoT devices in such scenarios usually have restricted onboard computation and power resources. In this paper, we present an LEO-assisted IoT Network (L-IoT) architecture in which a device splits its task and offloads a portion of it to the LEO satellite for processing within the coverage time. We formulate a task Split problem with Communication and Computation resource allocation (SCC) to minimize the L-IoT energy consumption. We propose an Alternating Optimization for Split ratio and Resource allocation (AOSR) algorithm. In particular, we use the outputs of the Karush-Kuhn-Tucker (KKT)-based resource allocation as part of the reward that feeds TD3. Lastly, numerical simulations show that the proposed AOSR approach reduces energy consumption by 12.7% compared to Soft Actor-Critic (SAC) and by 15% compared to Deep Deterministic Policy Gradient (DDPG).
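
The role of the KKT step can be sketched compactly: the inner convex allocation admits a closed form, and the cost it achieves becomes (part of) the TD3 reward. Everything below is a toy stand-in, a sum-of-uplink-times objective with a fixed spectral efficiency, not the paper's actual SCC model; `kkt_bandwidth`, `td3_reward`, and all parameter values are invented for illustration.

```python
import numpy as np

def kkt_bandwidth(d, B):
    """Closed-form KKT allocation: split total bandwidth B across users to
    minimize sum_i d_i / (b_i * se) subject to sum_i b_i = B. Stationarity
    of the Lagrangian gives b_i proportional to sqrt(d_i)."""
    w = np.sqrt(np.asarray(d, dtype=float))
    return B * w / w.sum()

def td3_reward(offload_bits, B=1e6, p_tx=0.1, se=3.0):
    """Reward fed back to the RL agent: negative total transmit energy under
    the inner KKT allocation (se = spectral efficiency in bit/s/Hz, assumed)."""
    b = kkt_bandwidth(offload_bits, B)
    uplink_time = offload_bits / (b * se)    # seconds per user
    return -float(p_tx * uplink_time.sum())  # energy = power * time

# The agent's action (task split ratios) determines offload_bits per user:
print(td3_reward(np.array([4e5, 1e5])))      # -> -0.03
```
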
Article
As a supplement to terrestrial communication networks, satellite edge computing can break through geographical limitations and provide on-orbit computing services for people in remote areas, achieving truly seamless global coverage. Considering time-varying channels, queue delays, and the dynamic loads of edge computing satellites, we propose a multi-agent task offloading and resource allocation (MATORA) algorithm with weighted latency as the optimization goal. The problem is a mixed-integer nonlinear program, which we decouple into task offloading and resource allocation sub-problems. For the offloading sub-problem, we propose a distributed multi-agent deep reinforcement learning algorithm in which each agent generates its own offloading decision without prior knowledge of the others. We show that the resource allocation problem is convex and can be solved using convex optimization methods. Experiments show that the proposed algorithm adapts better to channel variations and the dynamic load of edge computing satellites, and that it effectively reduces task latency and task drop rate.
Article
Computation offloading optimization for energy saving is becoming increasingly important in low-Earth orbit (LEO) satellite-terrestrial integrated networks (STINs), since battery technology has not kept up with the demands of ground terminal devices. In this paper, we design a delay-based deep reinforcement learning (DRL) framework specifically for computation offloading decisions, which can effectively reduce energy consumption. Additionally, we develop a multi-level feedback queue for computing allocation (RAMLFQ), which can effectively enhance the CPU's efficiency in task scheduling. We initially formulate the computation offloading problem with system delay as Delay Markov Decision Processes (DMDPs), and then transform them into equivalent standard Markov Decision Processes (MDPs). To solve the optimization problem effectively, we employ a double deep Q-network (DDQN) method, enhancing it with an augmented state space to better handle the unique challenges posed by system delays. Simulation results demonstrate that the proposed learning-based computation offloading algorithm achieves high performance efficiency and attains a lower total cost than other existing offloading methods.
Article
Integrating terrestrial and non-terrestrial networks can provide wide-area Internet of Things (IoT) services with global connections and ubiquitous communications. However, integrated terrestrial and non-terrestrial networks (ITNTNs) also face huge challenges caused by complex environments, diverse services, and heterogeneous nodes. Leveraging the powerful driving force of edge intelligence (EI) technology for network development, a framework named "ITNTNs with EI" is proposed in this paper as a solution to the challenges above. We design the system architecture, dynamic edge resource deployment, and intelligent edge model training for the proposed framework and discuss its application scenarios, challenges, and some open research issues. Then, simulation experiments on a DRL-based joint computation offloading and resource allocation optimization algorithm are conducted for a specific framework instance. The results demonstrate that optimization management solutions driven by EI can provide faster response and greater quality of service for IoT applications in ITNTNs.
Article
Edge computing is an efficient way to offload computational tasks for user equipment (UE) running computation-intensive and latency-sensitive applications. However, UEs cannot offload to ground edge servers when they are in remote areas. Mounting edge servers on low Earth orbit (LEO) satellites can provide remote UEs with task offloading when ground infrastructure is unavailable. In this paper, we introduce a multi-satellite-enabled edge computing system for offloading UEs' computational tasks, with the aim of minimizing system energy consumption by optimizing user association, power control, task scheduling, and computing resource allocation. Specifically, part of a UE's task is executed locally and the rest is offloaded to a satellite for processing. This energy minimization problem is formulated as a mixed-integer nonlinear programming (MINLP) problem. By decomposing the original problem into four sub-problems, we solve each sub-problem with convex optimization methods. In addition, an iterative algorithm is proposed to jointly optimize the task offloading and resource allocation strategy, which achieves a near-optimal solution within several iterations. Finally, the complexity and convergence of the algorithm are verified. In our simulations, the proposed algorithm is compared with different task offloading and resource allocation schemes in terms of system energy consumption, saving up to 43% of energy.
Article
Full-text available
Space-air-ground integrated edge computing is expected to provide pervasive computation services for the Internet of Things, especially in remote areas. However, the offloading process of power-limited IoT devices is a challenging issue due to unreliable communications in the aerial environment. In this paper, we propose an energy-efficient space-air-ground integrated edge computing network architecture, in which IoT devices choose the most appropriate LEO satellites or unmanned aerial vehicles (UAVs) for task offloading according to their energy level, communication conditions, and computing capabilities. In order to provide an efficient task offloading and energy saving policy under an uncertain aerial environment, a constrained Markov decision process is employed to formulate the task offloading decision problem, and a deep reinforcement learning (DRL)-based algorithm is devised to solve it. An adaptive federated DRL-based offloading method is further proposed to find sub-optimal offloading decisions while considering privacy protection and communication failures in the proposed network. Numerical results confirm the effectiveness of the proposed schemes in energy saving and computation efficiency.
Article
Full-text available
The low Earth orbit (LEO) satellite network is an important development trend for future mobile communication systems, as it can truly realize 'ubiquitous connection' across the whole world. In this paper, we present a cooperative computation offloading scheme for the LEO satellite network with a three-tier computation architecture, leveraging the vertical cooperation among ground users, LEO satellites, and the cloud server, as well as the horizontal cooperation between LEO satellites. To improve the quality of service for ground users, we optimize the computation offloading decisions to minimize the total execution delay of ground users, subject to the limited battery capacity of ground users and the computation capability of each LEO satellite. However, the formulated problem becomes a large-scale nonlinear integer programming problem as the number of ground users and LEO satellites increases, which is difficult to solve with general optimization algorithms. To address this challenge, we propose a distributed deep learning-based cooperative computation offloading (DDLCCO) algorithm, where multiple parallel deep neural networks (DNNs) learn the computation offloading strategy dynamically. Simulation results show that the proposed algorithm can achieve near-optimal performance with low computational complexity compared with other computation offloading strategies.
Article
Full-text available
While the fifth generation of communication networks (5G) is taking its first deployment steps, serious doubts remain about the fulfilment of the stringent requirements of its slices. To meet these complex requirements, robust access networks should support the current 5G air interface. In this regard, space and air networks, namely satellites and unmanned aerial vehicles (UAVs), are expected to play a key role thanks to wide coverage and flexible deployment, respectively. Currently, however, the integration of these platforms with terrestrial networks is weak. We therefore suggest bridging this gap by designing a heterogeneous traffic offloading approach in the space-air-ground integrated network (SAGIN). Our offloading approach covers the co-existing requirements of two heterogeneous 5G slices by smartly offloading traffic to the appropriate segment of SAGIN. Specifically, ultra-reliable low-latency communications (URLLC) traffic is offloaded to the UAV link and the terrestrial link to satisfy its stringent latency requirements, while enhanced mobile broadband (eMBB) traffic is offloaded to the UAV, terrestrial, and satellite links because it is less delay-sensitive but needs high data rates. Our offloading approach boosts the network's availability and reduces the latency experienced in SAGIN through efficient resource allocation and an optimized design of the UAV trajectory. Our findings highlight the key role that concrete integration between SAGIN segments plays in achieving better quality of service (QoS) for slices with heterogeneous requirements.
Article
Full-text available
The rise of NewSpace provides a platform for small and medium businesses to commercially launch and operate satellites in space. In contrast to traditional satellites, NewSpace provides the opportunity for delivering computing platforms in space. However, computational resources within space are usually expensive and satellites may not be able to compute all computational tasks locally. Computation Offloading (CO), a popular practice in Edge/Fog computing, could prove effective in saving energy and time in this resource-limited space ecosystem. However, CO alters the threat and risk profile of the system. In this paper we analyse security issues in space systems and propose a security-aware algorithm for CO. Our method is based on the reinforcement learning technique, Deep Deterministic Policy Gradient (DDPG). We show, using Monte-Carlo simulations, that our algorithm is effective under a variety of environment and network conditions and provide novel insights into the challenge of optimised location of computation.
Article
Full-text available
Opportunistic computation offloading is an effective method to improve the computation performance of mobile-edge computing (MEC) networks under dynamic edge environment. In this paper, we consider a multi-user MEC network with time-varying wireless channels and stochastic user task data arrivals in sequential time frames. In particular, we aim to design an online computation offloading algorithm to maximize the network data processing capability subject to the long-term data queue stability and average power constraints. The online algorithm is practical in the sense that the decisions for each time frame are made without the assumption of knowing the future realizations of random channel conditions and data arrivals. We formulate the problem as a multi-stage stochastic mixed integer non-linear programming (MINLP) problem that jointly determines the binary offloading (each user computes the task either locally or at the edge server) and system resource allocation decisions in sequential time frames. To address the coupling in the decisions of different time frames, we propose a novel framework, named LyDROO, that combines the advantages of Lyapunov optimization and deep reinforcement learning (DRL). Specifically, LyDROO first applies Lyapunov optimization to decouple the multi-stage stochastic MINLP into deterministic per-frame MINLP subproblems. By doing so, it guarantees to satisfy all the long-term constraints by solving the per-frame subproblems that are much smaller in size. Then, LyDROO integrates model-based optimization and model-free DRL to solve the per-frame MINLP problems with very low computational complexity. Simulation results show that under various network setups, the proposed LyDROO achieves optimal computation performance while stabilizing all queues in the system. Besides, it induces very low computation time that is particularly suitable for real-time implementation in fast fading environments.
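
A minimal sketch of one LyDROO-style frame follows (assumed structure, not the authors' released code): the DRL actor proposes binary offloading candidates, each candidate is scored by the per-frame deterministic subproblem (here reduced to a simple weighted-throughput-minus-power score), and the data queues are then updated. The `serve` and `power` models and all constants are invented placeholders.

```python
import numpy as np

def lyapunov_frame(q, h, candidates, V, serve, power, arrivals):
    """Score each binary offloading vector x by sum_i q_i * served_i - V * power,
    keep the best, then update queues: q <- max(q - served, 0) + arrivals."""
    scores = [float((q * serve(x, h)).sum()) - V * power(x, h) for x in candidates]
    x_best = candidates[int(np.argmax(scores))]
    return x_best, np.maximum(q - serve(x_best, h), 0.0) + arrivals

rng = np.random.default_rng(1)
N = 3                                              # users
serve = lambda x, h: x * 1e3 * np.log2(1 + 5 * h) + (1 - x) * 300.0  # edge vs. local bits
power = lambda x, h: float((x * 0.10 + (1 - x) * 0.05).sum())
q, h = np.zeros(N), rng.rayleigh(1.0, N)
cands = [rng.integers(0, 2, N) for _ in range(8)]  # stand-in for the DNN + quantization
x, q = lyapunov_frame(q, h, cands, V=20.0, serve=serve, power=power,
                      arrivals=rng.poisson(400, N).astype(float))
print("chosen offloading vector:", x, "queues:", q)
```
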
Article
Full-text available
The recent advances in low Earth orbit (LEO) satellites enable them to provide task processing capability for remote Internet of Things (IoT) mobile devices (IMDs) without proximal multi-access edge computing (MEC) servers. In this article, by leveraging LEO satellites, a novel MEC framework for terrestrial-satellite IoT is proposed. With the aid of a terrestrial-satellite terminal (TST), the computation offloading from IMDs to LEO satellites is divided into two stages in the ground and space segments. In order to minimize the weighted sum energy consumption of IMDs, we decompose the formulated problem into two layered subproblems: 1) the lower-layer subproblem minimizing the latency of the space segment, solved by sequential fractional programming attaining first-order optimality; and 2) the upper-layer subproblem, solved by exploiting the convex structure and applying the Lagrangian dual decomposition method. Based on the solutions to the two layered subproblems, an energy-efficient computation offloading and resource allocation algorithm (E-CORA) is proposed. Simulations show that: i) there exists a specific amount of offloading bits that minimizes the energy consumption of IMDs, and the proposed E-CORA outperforms both full offloading and local computing only; ii) a larger transmit power of the TST helps to save the energy of IMDs; and iii) increasing the number of visible satellites increases the ratio of offloading bits while decreasing the energy consumption of IMDs.
Article
Full-text available
Wireless powered mobile-edge computing (MEC) has recently emerged as a promising paradigm to enhance the data processing capability of low-power networks, such as wireless sensor networks and internet of things (IoT). In this paper, we consider a wireless powered MEC network that adopts a binary offloading policy, so that each computation task of wireless devices (WDs) is either executed locally or fully offloaded to an MEC server. Our goal is to acquire an online algorithm that optimally adapts task offloading decisions and wireless resource allocations to the time-varying wireless channel conditions. This requires quickly solving hard combinatorial optimization problems within the channel coherence time, which is hardly achievable with conventional numerical optimization methods. To tackle this problem, we propose a Deep Reinforcement learning-based Online Offloading (DROO) framework that implements a deep neural network as a scalable solution that learns the binary offloading decisions from the experience. It eliminates the need of solving combinatorial optimization problems, and thus greatly reduces the computational complexity especially in large-size networks. To further reduce the complexity, we propose an adaptive procedure that automatically adjusts the parameters of the DROO algorithm on the fly. Numerical results show that the proposed algorithm can achieve near-optimal performance while significantly decreasing the computation time by more than an order of magnitude compared with existing optimization methods. For example, the CPU execution latency of DROO is less than 0.1 second in a 30-user network, making real-time and optimal offloading truly viable even in a fast fading environment.
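
The core of DROO's decision module can be sketched as follows: the DNN emits a relaxed offloading vector in [0,1]^N, which is quantized into K binary candidates; each candidate is then evaluated by the resource-allocation subproblem and the best one is kept and stored to retrain the DNN. The flip-the-least-confident-bits rule below is one simple quantization variant, an assumption rather than the paper's exact order-preserving method.

```python
import numpy as np

def quantize_candidates(m, K):
    """Turn a relaxed DNN output m in [0,1]^N into K binary offloading
    candidates: candidate 0 is elementwise rounding; candidate k flips the
    k entries whose m_i is closest to 0.5 (the least confident roundings)."""
    x0 = (m > 0.5).astype(int)
    order = np.argsort(np.abs(m - 0.5))          # least confident entries first
    candidates = [x0]
    for k in range(1, min(K, m.size + 1)):
        xk = x0.copy()
        xk[order[:k]] = 1 - xk[order[:k]]
        candidates.append(xk)
    return candidates

# Each candidate would then be scored by the resource-allocation subproblem;
# the best (state, candidate) pair is added to the replay memory for training.
for x in quantize_candidates(np.array([0.9, 0.4, 0.55, 0.1]), K=3):
    print(x)
```
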
Conference Paper
Full-text available
Mobile edge computing (MEC) can efficiently minimize computational latency, reduce response time, and improve quality-of-service (QoS) by offloading tasks in the access network. Although many edge computation offloading schemes have been proposed for terrestrial networks, hybrid satellite-terrestrial communication, an emerging trend for next-generation communication, has not yet taken edge computing into consideration. In this paper, a novel satellite-terrestrial network with double edge computing is introduced to reap the benefits of providing computing services for remote areas. A strategy is designed to efficiently schedule the edge servers distributed in the satellite-terrestrial network so as to provide more powerful edge computing services. To allocate satellite edge computing resources efficiently within this strategy, a double edge computation offloading algorithm is proposed to optimize energy consumption and reduce latency by assigning tasks to the edge servers with minimal cost. Numerical results verify that the proposed algorithm can reduce the average latency and the energy consumption by approximately 45% and 49%, respectively. The study on the number of utilized satellite edge servers provides insight for subsequent studies of edge server scheduling in satellite-terrestrial networks.
Article
Full-text available
Mobile edge computing (MEC) and wireless power transfer (WPT) are two promising techniques to enhance the computation capability and prolong the operational time of low-power wireless devices that are ubiquitous in the Internet of Things. However, the computation performance and the harvested energy are significantly impacted by severe propagation loss. To address this issue, an unmanned aerial vehicle (UAV)-enabled MEC wireless powered system is studied in this paper. The computation rate maximization problems in the UAV-enabled MEC wireless powered system are investigated under both partial and binary computation offloading modes, subject to the energy harvesting causal constraint and the UAV's speed constraint. These problems are non-convex and challenging to solve. A two-stage algorithm and a three-stage alternating algorithm are respectively proposed for solving the formulated problems. Closed-form expressions for the optimal central processing unit frequencies, user offloading time, and user transmit power are derived. An optimal selection scheme on whether users locally compute or offload computation tasks is proposed for the binary computation offloading mode. Simulation results show that our proposed resource allocation schemes outperform other benchmark schemes, converge fast, and have low computational complexity.
Article
Full-text available
As augmented reality applications become popular, more and more data-hungry and computation-intensive tasks are delay-sensitive. Mobile edge computing is expected to be an effective solution to meet the low latency demand. In contrast to previous work on mobile edge computing, which mainly focuses on computation offloading, this paper introduces a new concept of task caching. Task caching refers to the caching of completed task applications and their related data in the edge cloud. We then investigate the problem of jointly optimizing task caching and offloading on the edge cloud under computing and storage resource constraints. We formulate this problem as a mixed integer program, which is hard to solve. To solve it, we propose an efficient algorithm, called task caching and offloading (TCO), based on an alternating iterative approach. Finally, simulation results show that our proposed TCO algorithm outperforms alternatives in terms of energy cost.
Book
Full-text available
The various types of satellite and projectile orbits are described and related to the conditions, in terms of energy and angular momentum, under which the ideal orbits occur. The transient nature of the initial and final part of a satellite's path and the conditions necessary for placing a satellite in a given orbit are discussed.
Article
Marine Internet of Things (IoT) systems have grown substantially with the development of non-terrestrial networks (NTN) via aerial and space vehicles in the upcoming sixth generation (6G), thereby assisting environmental protection, military reconnaissance, and sea transportation. Due to unpredictable climate changes and the extreme channel conditions of maritime networks, however, it is challenging to efficiently and reliably collect and compute the huge amount of maritime data. In this paper, we propose a hybrid low-Earth orbit (LEO) and unmanned aerial vehicle (UAV) edge computing method in space-air-sea integrated networks for marine IoT systems. Specifically, two types of edge servers, mounted on UAVs and LEO satellites, are endowed with computational capabilities for the real-time utilization of the sizable data collected from ocean IoT sensors. Our system aims at minimizing the total energy consumption of the battery-constrained UAV by jointly optimizing the bit allocation of communication and computation along with the UAV path planning under latency, energy budget, and operational constraints. For availability and practicality, the proposed methods are developed for three different cases according to the accessibility of the LEO satellite, "Always On", "Always Off", and "Intermediate Disconnected", by leveraging successive convex approximation (SCA) strategies. Via numerical results, we verify that significant energy savings can be accrued in all cases of LEO accessibility by means of joint optimization of bit allocation and UAV path planning, compared to partial optimization schemes that design only the bit allocation or only the trajectory of the UAV.
Article
Low Earth orbit (LEO) satellite networks have become one of the hot research areas as an essential part of satellite communication networks. The dynamic topology and unbalanced traffic demand may lead to inter-satellite link congestion; thus, improving network load balancing performance is one of the key issues to be addressed in LEO satellite networks. We propose a load-balanced collaborative offloading (LBCO) strategy to achieve a balanced traffic distribution in LEO satellite networks. The LBCO strategy consists of two algorithms, namely the channel-aware gradient fair association (CAGFA) algorithm and the inter-satellite links collaborative offloading (ISLCO) algorithm. The CAGFA algorithm aims to maximize the aggregate weighted utility, and the ISLCO algorithm aims to achieve traffic offloading and download observation data from the LEO satellite network. Specifically, we first determine the actual downloading satellite set and the neighboring satellite set by constructing an earth station (ES) time-share graph and a space-time topology graph. Then, the LBCO strategy uses the CAGFA algorithm to obtain the optimal satellite terminal association indicator and the load of downloading satellites. Finally, the ISLCO algorithm achieves proportional offloading of traffic among the neighboring satellites and downloads massive observation data. Simulations show that the proposed CAGFA algorithm improves the weighted utility by 3.3% and the convergence by 47.6% compared with the benchmark stochastic gradient descent-based association (SGDA) algorithm. We also validate the performance of the LBCO strategy by data download throughput, where it outperforms other benchmark algorithms under three different load scenarios.
Article
Enabling a satellite network with edge computing capabilities can further complement the advantages of a terrestrial network and provide users with a full range of computing services. Satellite edge computing is a potentially indispensable technology for future satellite-terrestrial integrated networks. In this paper, a three-tier edge computing architecture consisting of terminal, satellite, and cloud is proposed, where tasks can be processed at any of the three planes and satellites can cooperate to achieve on-board load balancing. Facing varying and random task queues with different service requirements, we formulate the objective of minimizing system energy consumption under delay and resource constraints, and jointly optimize the offloading decision and the communication and computing resource allocation variables. Moreover, resources are distributed based on a reservation mechanism to ensure the stability of the satellite-terrestrial link and the reliability of the computation process. To adapt to the dynamic environment, we propose an intelligent computation offloading scheme based on the deep deterministic policy gradient (DDPG) algorithm, which consists of several different deep neural networks (DNNs) to output both discrete and continuous variables. Additionally, by introducing a selection process for legal actions, simultaneous decisions on offloading locations and resource allocation under multi-task concurrency are realized. Simulation results show that the proposed scheme can effectively reduce the total energy consumption of the system while ensuring that tasks are completed on demand, and that it outperforms the benchmark algorithms.
Article
Sixth-Generation (6G) technologies will revolutionize the wireless ecosystem by enabling the delivery of futuristic services through satellite-terrestrial integrated networks (STINs). As the number of subscribers connected to STINs increases, it becomes necessary to investigate whether the edge computing paradigm may be applied to low Earth orbit satellite (LEOS) networks for supporting computation-intensive and delay-sensitive services for anyone, anywhere, and at any time. Inspired by this research dilemma, we investigate a LEOS edge-assisted multilayer multi-access edge computing (MEC) system. In this system, the MEC philosophy will be extended to LEOS, for defining the LEOS edge, in order to enhance the coverage of the multi-layer MEC system and address the users’ computing problems both in congested and isolated areas. We then design its operating offloading framework and explore its feasible implementation methodologies. In this context, we formulate a joint optimization problem for the associated communication and computation resource allocation for minimizing the overall energy dissipation of our LEOS edge-assisted multi-layer MEC system while maintaining a low computing latency. To solve the optimization problem effectively, we adopt the classic alternating optimization (AO) method for decomposing the original problem and then solve each sub-problem using low-complexity iterative algorithms. Finally, our numerical results show that the offloading scheme conceived achieves low computing latency and energy dissipation compared to the state-of-the-art solutions, a single layer MEC supported by LEOS or base stations (BS).
Article
The low Earth orbit (LEO) satellite network is regarded as a promising technology for delivering seamless service to remote areas such as rural regions. In this paper, we consider a dynamic data offloading problem in ultra-dense LEO satellite networks with large-scale ground users, where each ground user makes a distributed offloading decision based on its state information, the influence from other ground users, and the fees paid to satellites. To investigate the interaction between ground users and satellites, we formulate the problem as a Stackelberg game. Specifically, the satellites serve as the leaders, who decide the data service price at each time slot. The ground users, in contrast, are the followers, who decide their power control based on their states, the influence from other ground users, and the fees paid to satellites. Since this influence is difficult to estimate due to the large number of ground users, we employ the mean field game approach to transform the influence from others and from satellites into a mean field term, and reformulate the optimization problem as a Stackelberg mean field game (SMFG). Each ground user makes its data offloading decision by learning the future impact of the whole network in a distributed manner. For ground users, we solve the power control optimization problem by utilizing the G-prox primal-dual hybrid gradient (PDHG) algorithm, where the Fokker-Planck-Kolmogorov (FPK) equation is converted into a linear form via Taylor expansion. For satellites, we address the service pricing optimization problem by using the adjoint algorithm. Finally, numerical results demonstrate the effectiveness of the proposed algorithm.
Article
Benefiting from the development of satellite onboard processing capability, orbital computing can be realized by deploying edge computing servers on satellites to reduce task processing latency. However, edge computing based on geostationary Earth orbit (GEO) or low Earth orbit (LEO) satellites alone can hardly meet the latency requirements of satellite-assisted Internet of Things (SIoT) services. Moreover, the uneven distribution of tasks generated by SIoT devices causes load unbalancing among different satellites. In this paper, a hybrid GEO-LEO SIoT network is investigated with joint computing and communication resource allocation. To tackle the load unbalancing problem, tasks generated by SIoT devices can be processed by collaborative LEO satellites or forwarded to gateways on the ground via a GEO satellite. Thus, the joint task offloading, communication, and computing resource allocation for the hybrid SIoT network is formulated as a mixed integer dynamic programming problem with satellite-ground cooperation and inter-satellite cooperation via inter-satellite links. Then, an intelligent task offloading and multi-dimensional resource allocation algorithm (TOMRA) is proposed to minimize the latency of task offloading and processing. First, a method based on deep reinforcement learning is utilized to solve the subproblem of task offloading and channel allocation. Then, convex optimization is adopted to solve the subproblem of computing resource allocation under fixed offloading and channel allocation decisions. Simulation results show that the proposed TOMRA achieves better performance than the reference schemes.
Article
Mobile edge computing and cloud computing have emerged as effective technologies to alleviate the increasing computational workload of mobile devices. As a promising enabling 6G technology, the ultra-dense (UD) low earth orbit (LEO) satellite network with low communication latency and high throughput is considered a new bridge for cloud computation offloading. In this paper, we investigate energy-efficient cloud and edge computing in UD-LEO-assisted terrestrial-satellite networks. An optimization problem aiming at minimizing the energy consumption of the computation tasks is formulated. The optimization problem is a mixed-integer non-linear programming problem. To solve this problem, we decompose it into two subproblems, i.e., a joint user association and task scheduling subproblem, and an adaptive computation resource allocation subproblem. For the first subproblem, we model the input of a forward neural network (NN) as the large-scale information (i.e., channel gain and task arrival rates) and obtain the optimal solution by transforming the direct output of the NN. For the second subproblem, we introduce a successive convex approximation method to optimize it iteratively. The simulation results show that our proposed user association and task scheduling strategy outperforms two benchmark algorithms in terms of energy consumption under a strict delay bound and high user density.
Article
Edge-computing-enhanced Internet of Vehicles (EC-IoV) enables ubiquitous data processing and content sharing among vehicles and terrestrial edge computing (TEC) infrastructures (e.g., 5G base stations and roadside units) with little or no human intervention, and plays a key role in intelligent transportation systems. However, EC-IoV is heavily dependent on the connections and interactions between vehicles and TEC infrastructures, and thus breaks down in remote areas where TEC infrastructures are unavailable (e.g., deserts, isolated islands, and disaster-stricken areas). Driven by ubiquitous connections and global-area coverage, space-air-ground integrated networks (SAGINs) efficiently support seamless coverage and efficient resource management, and represent the next frontier for edge computing. In light of this, we first review the state-of-the-art edge computing research for SAGINs in this article. After discussing several existing orbital and aerial edge computing architectures, we propose a framework of edge-computing-enabled SAGINs to support various EC-IoV services for vehicles in remote areas. The main objective of the framework is to minimize task completion time and satellite resource usage. To this end, a preclassification scheme is presented to reduce the size of the action space, and a deep imitation learning-driven offloading and caching algorithm is proposed to achieve real-time decision making. Simulation results show the effectiveness of our proposed scheme. Finally, we discuss some technological challenges and future directions.
Article
Satellite networks can provide Internet of Things (IoT) devices in remote areas with seamless coverage and downlink multicast transmissions. However, the large transmission latency, serious path loss, as well as the energy and resource constraints of IoT terminals challenge the stringent service requirements for throughput and latency in the 6G era. To address these problems, technologies including space-air-ground integrated networks (SAGINs), machine learning, edge computing, and energy harvesting are highly expected in 6G IoT. In this article, we consider the unmanned aerial vehicles (UAVs) and satellites to offer wireless-powered IoT devices edge computing and cloud computing services, respectively. To accelerate the communications, Terahertz frequency bands are utilized for communications between UAVs and IoT devices. Since the tasks generated by terrestrial IoT devices can be conducted locally, offloaded to the UAV-based edge servers or remote cloud servers through satellites, we focus on the computation offloading problem and consider deep learning techniques to optimize the task success rate considering the energy dynamics and channel conditions. A deep-learning-based offloading policy optimization strategy is given where the long short-term memory model is considered to address the dynamics of energy harvesting performance. Through the theoretical explanation and performance analysis, we discover the importance of emerging technologies including SAGIN, energy harvesting, and artificial intelligence techniques for 6G IoT.
Article
In this paper, we investigate a satellite-aerial integrated edge computing network (SAIECN), which combines a low-Earth-orbit (LEO) satellite and aerial high-altitude platforms (HAPs) to provide edge computing services for ground user equipment (GUE). In the SAIECN, GUE computing tasks can be offloaded to HAP(s) or the LEO satellite. We minimize the weighted sum energy consumption of the SAIECN via joint GUE association, multi-user multiple-input multiple-output (MU-MIMO) transmit precoding, computation task assignment, and resource allocation. To solve the non-convex problem, we decompose the optimization problem into four subproblems and solve each one iteratively. For the GUE association subproblem, quadratic-transform-based fractional programming (QTFP) and difference-of-convex programming are utilized. The MU-MIMO transmit precoding subproblem is solved via QTFP and the weighted minimum mean-squared error method. The computation task assignment is addressed using the classic interior point method, while the computation resource allocation is derived in closed form. Numerical results show that the proposed SAIECN and the corresponding algorithm solve satellite-based edge computing effectively while maintaining the energy cost at a relatively low level.
Article
Recently, the ultra-dense low Earth orbit (LEO) satellite constellation over high-frequency band has served as a potential solution for high-capacity backhaul data services. In this paper, we consider an ultra-dense LEO-based terrestrial-satellite network where terrestrial users can access the network through the LEO-assisted backhaul. We aim to minimize the number of satellites in the constellation while satisfying the backhaul requirement of each user terminal (UT). We first derive the average total backhaul capacity of each UT, based on which a three-dimensional constellation optimization algorithm is proposed to minimize the number of satellites in the constellation. Simulation results verify our theoretical capacity analysis and show that for any given coverage ratio requirement, the corresponding optimized LEO satellite constellation can be obtained by the proposed three-dimensional constellation optimization algorithm. Given the same number of deployed LEO satellites, the average coverage ratio of the proposed LEO satellite constellation is at least 10 percentage points higher than that of Telesat constellation.
Article
Low Earth orbit (LEO) satellite networks can break through geographical restrictions and achieve global wireless coverage, an indispensable choice for future mobile communication systems. In this paper, we present a hybrid cloud and edge computing LEO satellite (CECLS) network with a three-tier computation architecture, which can provide ground users with heterogeneous computation resources and enable them to obtain computation services around the world. With the CECLS architecture, we investigate the computation offloading decisions to minimize the sum energy consumption of ground users, while satisfying the constraints on the coverage time and the computation capability of each LEO satellite. The considered problem is discrete and non-convex since the objective function and constraints contain binary variables, which makes it difficult to solve. To address this challenging problem, we convert the original non-convex problem into a linear programming problem using binary variable relaxation. Then, we propose a distributed algorithm leveraging the alternating direction method of multipliers (ADMM) to approximate the optimal solution with low computational complexity. Simulation results show that the proposed algorithm can effectively reduce the total energy consumption of ground users.
Article
Satellite-assisted vehicle-to-vehicle (V2V) communication can provide services for vehicles in depopulated areas, and it can be employed as an effective complementary component of terrestrial vehicular networks. Since the available communication and computing resources of satellites are scarce, task offloading and the allocation of computing and communication resources, which are coupled with each other, are critical issues for satellite-assisted V2V communication. To tackle these problems, we formulate the joint offloading decision and computing and communication resource allocation problem for satellite-assisted V2V communication as a mixed-integer nonlinear programming problem minimizing the weighted-sum end-to-end latency, and we decouple it into two subproblems. First, the Lagrange multiplier method is adopted to obtain the optimal computing and communication resource allocation under a fixed offloading decision. Then, the results of the resource allocation subproblem are fed into the offloading decision problem, which is formulated as a Markov decision process. To maximize the long-term reward of the offloading decision, a deep reinforcement learning based method is adopted to learn the optimal offloading decision. Finally, simulation results show that the proposed joint task offloading and resource allocation approach has superior performance compared with other schemes.
Article
The integration of satellite communications into the 5G ecosystem is pivotal to boost enhanced mobile broadband (eMBB) services in highly dynamic scenarios and in areas not optimally supported by terrestrial infrastructures. Given the heterogeneity of the networks involved, network slicing is a key networking paradigm to ensure different grades of quality of service (QoS) based on the users' and verticals' requirements. In this light, this paper proposes an optimisation framework able to exploit the resources allocated to the defined network slices so as to meet the diverse QoS/QoE requirements exposed by the network actors. Resource allocation schemes built upon neural network algorithms are validated through extensive simulation campaigns, which show the superiority of the proposed concepts with respect to other candidate solutions from the literature.
Article
The application of blockchain to mobile edge computing (MEC) systems has attracted great interest. However, the design and optimization of blockchain and MEC are done separately in most existing works, which results in sub-optimal performance. In this paper, we propose a joint optimization framework for blockchain-enabled MEC systems to achieve the optimal trade-off between the performance of the MEC system and that of the blockchain system. Specifically, both MEC and blockchain are considered as services in the framework, where energy consumption and delay/time to finality (DTF) are the performance metrics for the MEC system and the blockchain system, respectively. We formulate an optimization problem to achieve the optimal trade-off through jointly optimizing user association, data rate allocation, block producer scheduling, and computational resource allocation. To solve the problem, we decouple the optimization variables for efficient algorithm design. In addition, we develop an iterative algorithm for user association and data rate allocation and a bisection algorithm for computing resource allocation. Simulation results show the convergence of the proposed algorithms, and the proposed scheme achieves the optimal trade-off between energy consumption and DTF.
Article
To support the explosive growth of wireless devices and applications, various access techniques need to be developed for future wireless systems to provide reliable data services in vast areas. With recent significant advances in ultra-dense low Earth orbit (LEO) satellite constellations, satellite access networks (SANs) have shown their significant potential to integrate with 5G and beyond to support ubiquitous global wireless access. In this article, we propose an enabling network architecture for dense LEO-SANs in which the terrestrial and satellite communications are integrated to offer more reliable and flexible access. Through various physical-layer techniques such as effective interference management, diversity techniques, and cognitive radio schemes, the proposed SAN architecture can provide seamless and high-rate wireless links for wireless devices with different quality of service requirements. Three extensive applications and some future research directions in both the physical layer and network layer are then discussed.
Article
With the development of satellite networks, there is an emerging trend to integrate satellite networks with terrestrial networks, called satellite-terrestrial networks (STNs). The improvement of STNs needs innovative information and communication technologies (ICTs), such as networking, caching, and computing. In this paper, we propose a software-defined STN to manage and orchestrate networking, caching, and computing resources jointly. We formulate the joint resource allocation problem as a joint optimization problem and use a deep Q-learning approach to solve it. Simulation results show the effectiveness of our proposed scheme.
Article
Computation offloading is a proven, successful paradigm for enabling resource-intensive applications on mobile devices. Moreover, in view of emerging mobile collaborative applications (MCAs), the offloaded tasks can be duplicated when multiple users are in the same proximity. This motivates us to design a collaborative offloading scheme that caches the popular computation results that are likely to be reused by other mobile users. In this paper, we consider the scenario where multiple mobile users offload duplicated computation tasks to the network edge and share the computation results among them. Our goal is to develop optimal fine-grained collaborative offloading strategies with caching enhancements to minimize the overall execution delay on the mobile terminal side. To this end, we propose an optimal offloading with caching-enhancement scheme (OOCS) for the femto-cloud scenario and the mobile edge computing scenario, respectively. Simulation results show that, compared to six alternative solutions from the literature, our single-user OOCS can reduce execution delay by up to 42.83% and 33.28% for single-user femto-cloud and single-user mobile edge computing, respectively. Our multi-user OOCS can further reduce delay by 11.71% compared to single-user OOCS through user cooperation.
Article
Driven by the growing popularity of mobile applications, mobile cloud computing has been envisioned as a promising approach to enhance the computation capability of mobile devices and reduce their energy consumption. In this paper, we investigate the problem of multi-user computation offloading for mobile cloud computing in a dynamic environment, wherein mobile users become active or inactive dynamically, and the wireless channels over which they offload computation vary randomly. As mobile users are self-interested and selfish in offloading computation tasks to the mobile cloud, we formulate the mobile users' offloading decision process in the dynamic environment as a stochastic game. We prove that the formulated stochastic game is equivalent to a weighted potential game, which has at least one Nash equilibrium (NE). We quantify the efficiency of the NE and further propose a multi-agent stochastic learning algorithm to reach the NE with a guaranteed (and analytically derived) convergence rate. Finally, we conduct simulations to validate the effectiveness of the proposed algorithm and evaluate its performance in a dynamic environment.
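
For intuition about reaching an NE in a (weighted) potential game, the toy below runs asynchronous best-response dynamics on a two-user offloading congestion game; the cost matrix is invented, and the paper's algorithm is a multi-agent stochastic-learning variant rather than plain best response.

```python
# Two users each choose: 0 = compute locally, 1 = offload. Offloading is
# attractive unless both offload and congest the shared channel.
cost = {  # cost[(a1, a2)] = (user1 cost, user2 cost); invented numbers
    (0, 0): (3.0, 3.0), (0, 1): (3.0, 1.0),
    (1, 0): (1.0, 3.0), (1, 1): (4.0, 4.0),
}
a = [0, 0]
for _ in range(10):                      # asynchronous best-response sweeps
    for i in (0, 1):
        a[i] = min((0, 1), key=lambda ai: cost[tuple(a[:i] + [ai] + a[i + 1:])][i])
print("NE profile:", a)                  # -> [1, 0]: one offloads, one stays local
```
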
Article
The 5G Internet of Vehicles has become a new paradigm alongside the growing popularity and variety of computation-intensive applications with high requirements for computational resources and analysis capabilities. Existing network architectures and resource management mechanisms may not sufficiently guarantee satisfactory quality of experience and network efficiency, mainly suffering from the coverage limitations of roadside units, insufficient resources, unsatisfactory computational capabilities of onboard equipment, frequently changing network topology, and ineffective resource management schemes. To meet the demands of such applications, in this article we first propose a novel architecture that integrates the satellite network with the 5G cloud-enabled Internet of Vehicles to efficiently support seamless coverage and global resource management. An incentive-mechanism-based joint optimization problem of opportunistic computation offloading under delay and cost constraints is established under this framework, in which a vehicular user can either significantly reduce application completion time by offloading workloads to several nearby vehicles through opportunistic vehicle-to-vehicle channels while effectively controlling the cost, or protect its own profit by providing compensated computing service. As the optimization problem is non-convex and NP-hard, simulated annealing based on Markov chain Monte Carlo together with the Metropolis algorithm is applied, which can efficaciously obtain high-quality, cost-effective approximations of the global optimal solutions. The effectiveness of the proposed mechanism is corroborated through simulation results.
Article
Mobile-edge computing (MEC) has recently emerged as a prominent technology to liberate mobile devices from computationally intensive workloads by offloading them to a proximate MEC server. To make offloading effective, the radio and computational resources need to be dynamically managed to cope with time-varying computation demands and wireless fading channels. In this paper, we develop an online joint radio and computational resource management algorithm for multi-user MEC systems, with the objective of minimizing the long-term average weighted sum power consumption of the mobile devices and the MEC server, subject to a task buffer stability constraint. Specifically, at each time slot, the optimal CPU-cycle frequencies of the mobile devices are obtained in closed form, and the optimal transmit power and bandwidth allocation for computation offloading are determined with the Gauss-Seidel method; for the MEC server, both the optimal frequencies of the CPU cores and the optimal MEC server scheduling decision are derived in closed form. Besides, a delay-improved mechanism is proposed to reduce the execution delay. Rigorous performance analysis is conducted for the proposed algorithm and its delay-improved version, indicating that the weighted sum power consumption and execution delay obey an [O(1/V), O(V)] tradeoff with V as a control parameter. Simulation results are provided to validate the theoretical analysis and demonstrate the impacts of various parameters.
Article
Mobile-edge computing (MEC) is an emerging paradigm to meet the ever-increasing computation demands from mobile applications. By offloading computationally intensive workloads to the MEC server, the quality of computation experience, e.g., the execution latency, can be greatly improved. Nevertheless, as on-device battery capacities are limited, computation is interrupted when the battery energy runs out. To provide satisfactory computation performance and achieve green computing, it is of significant importance to power mobile devices with renewable energy via energy harvesting (EH) technologies. In this paper, we investigate a green MEC system with EH devices and develop an effective computation offloading strategy. The execution cost, which addresses both execution latency and task failure, is adopted as the performance metric. A low-complexity online algorithm, namely the Lyapunov optimization-based dynamic computation offloading (LODCO) algorithm, is proposed, which jointly decides the offloading decision, the CPU-cycle frequencies for mobile execution, and the transmit power for computation offloading. A unique advantage of this algorithm is that its decisions depend only on instantaneous side information, without requiring distribution information of the computation task requests, the wireless channel, or the EH processes. Implementing the algorithm only requires solving a deterministic problem in each time slot, for which the optimal solution can be obtained either in closed form or by bisection search. Moreover, the proposed algorithm is shown to be asymptotically optimal via rigorous analysis. Simulation results verify the theoretical analysis and validate the effectiveness of the proposed algorithm.
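
The per-slot problems LODCO solves admit either closed forms or a one-dimensional bisection; the snippet below shows the generic bisection step on a monotone optimality condition. The condition g(p) itself is a made-up stand-in, not the paper's actual first-order condition.

```python
def bisect(g, lo, hi, tol=1e-9, iters=100):
    """Find the root of a monotone scalar condition g on [lo, hi]; this is the
    bisection-search fallback for per-slot decisions without closed forms."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if g(lo) * g(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

# Toy first-order condition: marginal energy cost 2p equals marginal benefit
# 1.5/(1+p); the root (exactly p = 0.5 here) is the per-slot transmit power.
g = lambda p: 2.0 * p - 1.5 / (1.0 + p)
print(f"per-slot transmit power: {bisect(g, 0.0, 2.0):.4f}")
```
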
Article
This monograph presents the main complexity theorems in convex optimization and their corresponding algorithms. Starting from the fundamental theory of black-box optimization, the material progresses towards recent advances in structural optimization and stochastic optimization. Our presentation of black-box optimization, strongly influenced by Nesterov's seminal book and Nemirovski's lecture notes, includes the analysis of cutting plane methods, as well as (accelerated) gradient descent schemes. We also pay special attention to non-Euclidean settings (relevant algorithms include Frank-Wolfe, mirror descent, and dual averaging) and discuss their relevance in machine learning. We provide a gentle introduction to structural optimization with FISTA (to optimize a sum of a smooth and a simple non-smooth term), saddle-point mirror prox (Nemirovski's alternative to Nesterov's smoothing), and a concise description of interior point methods. In stochastic optimization we discuss stochastic gradient descent, mini-batches, random coordinate descent, and sublinear algorithms. We also briefly touch upon convex relaxation of combinatorial problems and the use of randomness to round solutions, as well as random walks based methods.
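
As a small taste of the black-box machinery the monograph covers, here is projected gradient descent on a box-constrained quadratic; the objective, constraint set, and step size are arbitrary examples chosen for illustration.

```python
import numpy as np

def projected_gradient_descent(grad, project, x0, step, iters):
    """x_{t+1} = P_X(x_t - step * grad(x_t)): the basic projected scheme
    analyzed in black-box convex optimization."""
    x = x0
    for _ in range(iters):
        x = project(x - step * grad(x))
    return x

c = np.array([1.5, -0.3, 0.7])
grad = lambda x: 2.0 * (x - c)              # f(x) = ||x - c||^2
project = lambda x: np.clip(x, 0.0, 1.0)    # Euclidean projection onto [0,1]^3
print(projected_gradient_descent(grad, project, np.zeros(3), step=0.1, iters=200))
# -> approximately [1.0, 0.0, 0.7], the constrained minimizer clip(c, 0, 1)
```
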
OneWeb Global Access
  • T. Azzarelli

Energy efficient task caching and offloading for mobile edge computing
  • Y. Hao
  • M. Chen
  • L. Hu
  • M. S. Hossain
  • A. Ghoneim

Computation offloading strategy in satellite terrestrial networks with double edge computing
  • Y. Wang
  • J. Zhang
  • X. Zhang
  • P. Wang
  • J. A. Liu