Conference Paper

A Deep Q-learning Approach for Latency Optimization in MEC-augmented Low Earth Orbit Satellite Network
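The full text of this paper is not included here. Purely as a rough, hypothetical illustration of the technique named in the title, the sketch below shows a minimal deep Q-learning loop (PyTorch) in which an agent picks an offloading target (local execution, a MEC-equipped LEO satellite, or a remote cloud) and is rewarded with the negative of the observed task latency. The state layout, action set, network size, and every parameter name are assumptions, not the authors' design.

```python
# Hypothetical sketch: deep Q-learning for offloading-target selection in a
# MEC-augmented LEO network. Environment, state layout, and reward (negative
# latency) are illustrative assumptions; a target network is omitted for brevity.
import random
from collections import deque

import torch
import torch.nn as nn

STATE_DIM = 6      # e.g. task size, local CPU load, link rates to visible satellites
N_ACTIONS = 3      # 0 = compute locally, 1 = offload to LEO MEC, 2 = offload to cloud
GAMMA, EPS, BATCH = 0.95, 0.1, 32

q_net = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(),
                      nn.Linear(64, N_ACTIONS))
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)
replay = deque(maxlen=10_000)   # after each step: replay.append((s, a, r, s_next))

def select_action(state):
    """Epsilon-greedy choice of offloading target for one task."""
    if random.random() < EPS:
        return random.randrange(N_ACTIONS)
    with torch.no_grad():
        return int(q_net(torch.tensor(state)).argmax())

def train_step():
    """One gradient step on the temporal-difference error over a sampled batch."""
    if len(replay) < BATCH:
        return
    batch = random.sample(replay, BATCH)
    s, a, r, s2 = (torch.tensor(x, dtype=torch.float32) for x in zip(*batch))
    q = q_net(s).gather(1, a.long().unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        target = r + GAMMA * q_net(s2).max(1).values   # reward = -latency
    loss = nn.functional.mse_loss(q, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```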

Article
Driven by the urgent requirement of ubiquitous and reliable coverage for global users, the low earth orbit (LEO) satellite network has attracted considerable attention from both academia and industry. By deploying mobile edge computing (MEC) servers in LEO satellites, computation offloading and content caching services can be provided for remote Internet-of-Things (IoT) devices without proximal servers. In this paper, the joint optimization of computation offloading, radio resource allocation, and caching placement in LEO satellite MEC networks is investigated. The problem is formulated to minimize the total delay of all ground IoT devices while satisfying energy, computing, and caching constraints. To solve this mixed-integer, non-convex problem, a Lagrange dual decomposition (LDD)-based algorithm is proposed to obtain the closed-form optimal solution. A heuristic algorithm is then proposed to further reduce the computational complexity. Numerical results validate that both proposed algorithms are effective compared with optimal exhaustive search, full local computing, and full MEC methods. In addition, the offloading ratio and the average delay of all IoT devices are demonstrated for different numbers and computing capacities of devices and satellites.
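As a toy illustration of the delay model described above (not the authors' LDD algorithm), the sketch below compares the local-computing delay of each device with its offloading delay (uplink transmission plus satellite MEC computing, with transmission skipped when the requested content is cached) and picks the smaller one. All parameter names and values are assumptions.

```python
# Illustrative per-device delay comparison for LEO-MEC computation offloading.
# All quantities (cycles, link rates, CPU frequencies) are made-up examples.
from dataclasses import dataclass

@dataclass
class Task:
    input_bits: float   # task input size in bits
    cycles: float       # CPU cycles required
    cached: bool        # True if the needed content is already cached on the satellite

def local_delay(task, f_local_hz):
    return task.cycles / f_local_hz

def offload_delay(task, uplink_bps, f_mec_hz):
    tx = 0.0 if task.cached else task.input_bits / uplink_bps
    return tx + task.cycles / f_mec_hz

def schedule(tasks, f_local_hz=1e9, uplink_bps=20e6, f_mec_hz=10e9):
    """Greedy per-device decision: offload only when it lowers delay."""
    decisions = []
    for t in tasks:
        d_loc = local_delay(t, f_local_hz)
        d_off = offload_delay(t, uplink_bps, f_mec_hz)
        decisions.append(("offload" if d_off < d_loc else "local", min(d_loc, d_off)))
    return decisions

print(schedule([Task(2e6, 5e8, False), Task(5e5, 2e9, True)]))
```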
Article
Popular small satellites host individual sensors or sensor networks in space but require ground stations with directional antennas on rotators to download the sensors' data. Such ground stations can establish a single downlink with only one satellite at a time and are highly vulnerable to system outages under severe channel impairments or steering-engine failures. To improve the reception quality of small-satellite signals, this paper presents a simple receive diversity scheme with proposed processing algorithms to virtually combine satellite downlink streams collected from multiple omnidirectional receivers. These algorithms process multiple received versions of the same signal from multiple geographically separated receiving sites and combine them in one virtual ground station. This virtual ground station helps detect the intended signal more reliably using only a network of simple, cooperating software-defined radio receivers with omnidirectional antennas. The suggested receive diversity combining techniques can provide significant system performance improvement compared to the performance of each individual receiving site. In addition, the probability of system outages is decreased even if one or more sites experience severe impairments. Simulation results show that the bit error rate (BER) of the combined stream is lower than the BER of the best receiving site considered alone. Moreover, virtual ground stations with cooperative omnidirectional reception at geographically separated receivers also allow data to be received from multiple satellites in the same frequency band simultaneously, as software-defined receivers can digitize a wider portion of the frequency band. This is a significant conceptual advantage as the number of small satellites transmitting data grows, since it avoids the corresponding increase in the number of fully equipped ground stations with rotators.
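A minimal numerical sketch of the receive-diversity idea described above: several geographically separated omnidirectional receivers observe noisy copies of the same BPSK stream, and a virtual ground station combines them, here with maximal-ratio combining as one possible combining rule. The SNR values and signal model are assumptions, not the paper's processing algorithms.

```python
# Toy receive-diversity combining across several "receiving sites".
# BPSK over AWGN with per-site SNRs; maximal-ratio combining (MRC) is used as
# one example of a combining rule; all numbers are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n_bits, snrs_db = 100_000, [2.0, 5.0, 0.0]          # three receiving sites

bits = rng.integers(0, 2, n_bits)
symbols = 2 * bits - 1                              # BPSK: 0 -> -1, 1 -> +1

received, weights = [], []
for snr_db in snrs_db:
    snr = 10 ** (snr_db / 10)
    noise = rng.normal(scale=np.sqrt(1 / (2 * snr)), size=n_bits)
    received.append(symbols + noise)
    weights.append(snr)                             # MRC weight proportional to SNR

combined = np.average(received, axis=0, weights=weights)

def ber(stream):
    return float(np.mean((stream > 0).astype(int) != bits))

print("per-site BER:", [ber(r) for r in received])
print("combined BER:", ber(combined))               # lower than the best single site
```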
Article
With the rapid development of computing power and artificial intelligence, IoT devices equipped with ubiquitous sensors are increasingly endowed with intelligence. People can enjoy many conveniences with intelligent devices, such as face recognition, video understanding, and motion estimation. Currently, deep neural networks (DNNs) are the mainstream technology in intelligent mobile applications. Inspired by DNN model partition schemes, the edge computing paradigm can be used collaboratively to improve the effectiveness of intelligent task execution on IoT devices. However, due to the dynamics of the wireless network environment and the increasing number of IoT devices, a carelessly designed DNN partition policy poses a significant challenge to the efficiency of task inference. Moreover, the shortage and high rental cost of edge computing resources make the optimization of DNN-based task execution more difficult. To cope with these issues, we propose a joint method that combines self-adaptive DNN partitioning with cost-effective resource allocation to facilitate collaborative computation between IoT devices and edge servers. The proposed online algorithm provably keeps the overall rental cost within an upper bound of the optimal solution while guaranteeing the latency of DNN-based task inference. To evaluate the performance of our strategy, we conduct extensive trace-driven studies and show that the proposed method achieves near-optimal results and outperforms alternative methods.
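As a simplified stand-in for the DNN-partition idea (not the paper's self-adaptive online algorithm), the code below enumerates candidate split points of a layer-profiled DNN and picks the one minimizing end-to-end latency, i.e. device-side compute plus intermediate-feature upload plus edge-side compute, reporting an edge rental cost alongside. The layer profiles, link rate, and price are invented.

```python
# Toy DNN partition-point selection between an IoT device and an edge server.
# Layer profiles (per-layer latency, output size), link rate, and rental price
# are all invented numbers for illustration.
INPUT_KB = 1_200.0
LAYERS = [
    # (device_ms, edge_ms, output_kb) for each layer -- hypothetical profile
    (40.0, 4.0, 600.0),
    (35.0, 3.5, 300.0),
    (30.0, 3.0, 150.0),
    (25.0, 2.5, 20.0),
    (10.0, 1.0, 1.0),
]
UPLINK_KBPS = 8_000.0   # device-to-edge link rate
RENT_PER_MS = 0.002     # assumed cost of renting edge compute time

def evaluate(split):
    """split = number of layers run on the device; the rest run at the edge."""
    device_ms = sum(l[0] for l in LAYERS[:split])
    edge_ms = sum(l[1] for l in LAYERS[split:])
    upload_kb = 0.0 if split == len(LAYERS) else (
        INPUT_KB if split == 0 else LAYERS[split - 1][2])
    upload_ms = 1_000.0 * upload_kb / UPLINK_KBPS
    return device_ms + upload_ms + edge_ms, edge_ms * RENT_PER_MS

best = min(range(len(LAYERS) + 1), key=lambda s: evaluate(s)[0])
latency, rent = evaluate(best)
print(f"run first {best} layer(s) locally: {latency:.1f} ms total, edge rent ~{rent:.3f}")
```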
Article
The sixth generation (6G) wireless communication networks are envisioned to revolutionize customer services and applications via the Internet of Things (IoT) towards a future of fully intelligent and autonomous systems. In this article, we explore the emerging opportunities brought by 6G technologies in IoT networks and applications, by conducting a holistic survey on the convergence of 6G and IoT. We first shed light on some of the most fundamental 6G technologies that are expected to empower future IoT networks, including edge intelligence, reconfigurable intelligent surfaces, space-air-ground-underwater communications, Terahertz communications, massive ultra-reliable and low-latency communications, and blockchain. Particularly, compared to the other related survey papers, we provide an in-depth discussion of the roles of 6G in a wide range of prospective IoT applications via five key domains, namely Healthcare Internet of Things, Vehicular Internet of Things and Autonomous Driving, Unmanned Aerial Vehicles, Satellite Internet of Things, and Industrial Internet of Things. Finally, we highlight interesting research challenges and point out potential directions to spur further research in this promising area.
Article
The low earth orbit mobile satellite system (LEO-MSS) is the principal system for providing communication support to mobile terminals beyond the coverage of terrestrial communication systems. However, the rapid movement of LEO satellites and the current single-layer system architecture restrict the capability to provide satisfactory service quality, especially in remote and non-land regions with high traffic demand. To tackle this problem, high-altitude platforms (HAPs) and terrestrial relays (TRs) are introduced to cover hot-spot regions, turning the current single-layer system into an LEO-HAP multi-layer access network. Under this setup, we propose a hierarchical resource allocation approach to circumvent the complex management caused by the intricate relationships among different layers. Specifically, to maximize throughput, we propose a dynamic multi-beam joint resource optimization method for LEO-ground downlinks based on the predicted movement of LEO satellites. We then propose a dynamic resource optimization method for HAP-ground downlinks when LEO satellites and HAPs share the same spectrum. To solve these problems, we use the Lagrange dual method and Karush-Kuhn-Tucker (KKT) conditions to find the optimal solutions. Numerical results show that the proposed architecture outperforms the current LEO-MSS in terms of average capacity. In addition, the proposed optimization methods increase the throughput of LEO-ground and HAP-ground downlinks with acceptable complexity.
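For the Lagrange-dual / KKT step mentioned above, a classic closed-form outcome is water-filling power allocation across downlink channels or beams. The short sketch below (with invented channel gains, noise levels, and power budget) bisects on the water level so that the allocated powers meet the budget; it illustrates the style of solution, not the paper's exact method.

```python
# Water-filling power allocation: maximize sum_i log2(1 + g_i * p_i / n_i)
# subject to sum_i p_i <= P_total, p_i >= 0. The KKT conditions give
# p_i = max(0, mu - n_i / g_i), where the water level mu is set by the budget.
import math

def water_filling(gains, noise, p_total, iters=60):
    inv = [n / g for g, n in zip(gains, noise)]       # per-channel "floor"
    lo, hi = 0.0, max(inv) + p_total                  # bracket the water level
    for _ in range(iters):                            # bisection on mu
        mu = (lo + hi) / 2
        used = sum(max(0.0, mu - f) for f in inv)
        lo, hi = (mu, hi) if used < p_total else (lo, mu)
    return [max(0.0, mu - f) for f in inv]

gains, noise = [1.0, 0.4, 0.1], [1.0, 1.0, 1.0]       # invented link conditions
powers = water_filling(gains, noise, p_total=10.0)
rates = [math.log2(1 + g * p / n) for g, p, n in zip(gains, powers, noise)]
print("powers:", [round(p, 2) for p in powers], "sum:", round(sum(powers), 2))
print("rates :", [round(r, 2) for r in rates])
```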
Article
In this paper, we formulate a long-term resource allocation problem for the non-orthogonal multiple access (NOMA) downlink of a satellite-based Internet of Things (S-IoT) system, to achieve the optimal decoding order and power allocation. This long-term resource allocation problem can be decomposed into two subproblems: a rate control subproblem and a power allocation subproblem. The latter is non-convex, and its solution depends on both the queue state and the channel state. However, the queue state and channel state change from one time slot to another, which makes it extremely difficult to characterize the optimal decoding order of successive interference cancellation (SIC). Therefore, we explore the weighted relationship between queue state and channel state to derive an optimal decoding order by leveraging deep learning. The proposed deep learning-based long-term power allocation (DL-PA) scheme can efficiently derive a more accurate decoding order than the conventional solution. Simulation results show that the DL-PA scheme improves the performance of the S-IoT NOMA downlink system in terms of long-term network utility, average arrival rate, and queuing delay.
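As a highly simplified stand-in for the learned decoding order (the DL-PA model itself is not available here), the sketch below scores each user by a weighted combination of queue backlog and channel gain, uses the ranking as the SIC decoding order, and then applies a basic NOMA rate computation. The weighting, power split, and all numbers are assumptions.

```python
# Toy NOMA-downlink sketch: pick an SIC decoding order from a weighted score of
# queue backlog and channel gain, then evaluate per-user rates for that order.
# The scoring weight, power split, and all numbers are illustrative assumptions.
import math

USERS = [
    # (name, queue_bits, channel_gain) -- invented per-user state
    ("u1", 8e5, 0.2),
    ("u2", 2e5, 1.0),
    ("u3", 5e5, 0.5),
]
W_QUEUE, NOISE, POWERS = 0.4, 1e-1, [0.6, 0.3, 0.1]   # power split over the order

def decoding_order(users, w_queue=W_QUEUE):
    """Earlier position = decoded first = gets the larger power share."""
    def score(u):
        _, q, g = u
        return w_queue * (q / 1e6) - (1 - w_queue) * g  # backlogged / weak users first
    return sorted(users, key=score, reverse=True)

order = decoding_order(USERS)
for k, (name, _, g) in enumerate(order):
    interference = g * sum(POWERS[k + 1:])              # signals decoded after user k
    rate = math.log2(1 + POWERS[k] * g / (interference + NOISE))
    print(f"{name}: position {k}, rate {rate:.2f} bit/s/Hz")
```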
Conference Paper
We consider the design of demand assigned multiple access (DAMA) algorithms that efficiently utilize limited RF uplink resources for packet-switched military satellite communication networks. In previous work, we designed DAMA algorithms that optimized link-layer efficiency and throughput while controlling delay and jitter. In this work, we assess the ability of our DAMA algorithms to meet service level agreements (SLAs) between the network management system and the terminals. We evaluate the ability of four DAMA algorithms to provide terminals with committed information rates (CIRs) under various system loading conditions. The designs have increasing levels of confidence in the accuracy of the predicted demand. Results show that although traffic demand cannot be predicted precisely, current demand provides insight into future demand, and this information can be used to provide CIR guarantees to terminals more efficiently.
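The abstract does not specify the four DAMA algorithms, so the sketch below only illustrates the generic underlying idea: each terminal first receives uplink capacity up to its committed information rate, and any remaining capacity is shared in proportion to the predicted demand above the CIR. All rates are invented.

```python
# Generic demand-assigned multiple access (DAMA) allocation sketch:
# grant each terminal its CIR (or its demand, if smaller) first, then split
# leftover uplink capacity in proportion to excess demand. Numbers are illustrative.
def dama_allocate(capacity_kbps, terminals):
    """terminals: dict name -> (cir_kbps, predicted_demand_kbps)."""
    alloc = {}
    for name, (cir, demand) in terminals.items():
        alloc[name] = min(cir, demand, capacity_kbps)   # committed rate first
        capacity_kbps -= alloc[name]
    excess = {n: max(0.0, d - alloc[n]) for n, (_, d) in terminals.items()}
    total_excess = sum(excess.values())
    if capacity_kbps > 0 and total_excess > 0:
        share = min(1.0, capacity_kbps / total_excess)  # proportional leftover split
        for name in terminals:
            alloc[name] += excess[name] * share
    return alloc

print(dama_allocate(1_000.0, {
    "term_a": (300.0, 600.0),
    "term_b": (200.0, 250.0),
    "term_c": (100.0, 900.0),
}))
```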
Article
Satellite networks, as a supplement to terrestrial networks, can provide effective computing services to Internet of Things (IoT) users in remote areas. Due to the limited resources of satellites in computing, storage, and energy, a computation task from an IoT user can be divided into several parts and cooperatively accomplished by multiple satellites to improve the overall operational efficiency of the satellite network. Network function virtualization (NFV) is viewed as a new paradigm for allocating network resources on demand, and satellite edge computing combined with NFV is becoming an emerging topic. In this paper, we propose a potential game approach for virtual network function (VNF) placement in satellite edge computing. The VNF placement problem aims to minimize the deployment cost of each user request; furthermore, we require that the satellite network provide computing services for as many user requests as possible. We formulate the VNF placement problem as a potential game to maximize the overall network payoff and analyze it with a game-theoretical approach. We implement a decentralized resource allocation algorithm based on a potential game (PGRA) that tackles the VNF placement problem by finding a Nash equilibrium. Finally, we conduct experiments to evaluate the performance of the proposed PGRA algorithm. The simulation results show that PGRA can effectively address the VNF placement problem in satellite edge computing.
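A minimal best-response sketch of the potential-game idea (not the authors' PGRA algorithm): each request repeatedly switches its placement to the satellite that currently minimizes its own cost, modeled here as a base deployment cost plus a congestion term shared with co-located requests, until no request wants to move, i.e. a Nash equilibrium of this toy game. All costs are invented.

```python
# Toy congestion-style placement game: each request picks the satellite that
# minimizes base_cost + LOAD_PENALTY * (number of requests on that satellite).
# Congestion games admit an exact potential, so best-response dynamics converge
# to a Nash equilibrium. All costs are invented for illustration.
BASE_COST = [            # base deployment cost of request i on satellite s
    [3.0, 1.0, 2.0],
    [1.0, 2.0, 3.0],
    [2.0, 3.0, 1.0],
    [1.0, 1.0, 4.0],
]
LOAD_PENALTY = 1.5

def cost(i, s, placement):
    load = sum(1 for j, sj in enumerate(placement) if sj == s and j != i) + 1
    return BASE_COST[i][s] + LOAD_PENALTY * load

def best_response_dynamics(placement):
    changed = True
    while changed:                                  # loop until no one wants to move
        changed = False
        for i in range(len(placement)):
            best = min(range(len(BASE_COST[i])), key=lambda s: cost(i, s, placement))
            if best != placement[i]:
                placement[i], changed = best, True
    return placement

placement = best_response_dynamics([0, 0, 0, 0])
print("equilibrium placement:", placement,
      "costs:", [round(cost(i, s, placement), 1) for i, s in enumerate(placement)])
```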
Article
Multi-Access Edge Computing (MEC) will allow the implementation of low-latency services that have so far been unfeasible. The European Telecommunications Standards Institute (ETSI) and the 3rd Generation Partnership Project (3GPP) are working towards the standardization of MEC in 5G networks and the corresponding solutions for routing user traffic to applications in local area networks. Nevertheless, there are no practical implementations for dynamically relocating applications from the core to a MEC host, or from one MEC host to another, while ensuring service continuity. In this paper, we propose a solution based on Software-Defined Networking (SDN) to create a new instance of the IP anchor point that dynamically redirects User Equipment (UE) traffic to a new physical location (e.g., an edge infrastructure). We also present a novel approach that leverages SDN to replicate the previous connection context in the new instance of the IP anchor point, thus guaranteeing Session and Service Continuity (SSC), and compare it with alternative state replication strategies. This approach can be used to implement edge services in 4G or 5G networks.
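The sketch below is a purely conceptual, library-free model of the redirection step described above: when the anchor is relocated to an edge site, a controller object rewrites the per-UE flow entry so that subsequent packets reach the new IP anchor point, while the saved connection context is copied into the new instance. The class and field names are invented; a real deployment would drive an actual SDN controller instead.

```python
# Conceptual model of SDN-based traffic redirection to a new IP anchor point at
# the edge. Names, fields, and the context format are invented for illustration.
from dataclasses import dataclass, field

@dataclass
class AnchorInstance:
    site: str                                        # e.g. "core" or "edge-1"
    context: dict = field(default_factory=dict)      # replicated connection state

@dataclass
class Controller:
    flow_table: dict = field(default_factory=dict)   # ue_id -> AnchorInstance

    def attach(self, ue_id, anchor):
        self.flow_table[ue_id] = anchor

    def relocate(self, ue_id, new_site):
        """Create a new anchor at the edge, replicate context, redirect flows."""
        old = self.flow_table[ue_id]
        new = AnchorInstance(site=new_site, context=dict(old.context))  # state copy
        self.flow_table[ue_id] = new       # subsequent UE packets hit the new anchor
        return new

ctrl = Controller()
ctrl.attach("ue-42", AnchorInstance("core", {"tcp_seq": 1024, "nat_port": 40001}))
moved = ctrl.relocate("ue-42", "edge-1")
print(moved.site, moved.context)
```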
Article
Mobile Edge Computing (MEC) has emerged as a promising supporting architecture that provides a variety of resources at the network edge, acting as an enabler of edge intelligence services that empower massive mobile and Internet of Things (IoT) devices with AI capability. With the assistance of edge servers, user equipments (UEs) are able to run deep neural network (DNN) based AI applications, which are generally resource-hungry and compute-intensive, such that an individual UE can hardly run them by itself in real time. However, the resources of each individual edge server are typically limited. Therefore, any resource optimization involving edge servers is by nature a resource-constrained optimization problem and needs to be tackled in this realistic context. Motivated by this observation, we investigate the optimization problem of DNN partitioning (an emerging DNN offloading scheme) under a realistic multi-user, resource-constrained condition that is rarely considered in previous works. Despite the extremely large solution space, we reveal several properties of this optimization problem of joint multi-UE DNN partitioning and computational resource allocation. We propose an algorithm called Iterative Alternating Optimization (IAO) that achieves the optimal solution in polynomial time. In addition, we present a rigorous theoretical analysis of the algorithm in terms of time complexity and performance under realistic estimation error. Moreover, we build a prototype that implements our framework and conduct extensive experiments using realistic DNN models, whose results demonstrate its effectiveness and efficiency.
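A toy sketch of the alternating structure described above (not the authors' IAO algorithm or its optimality proof): with edge CPU shares fixed, each UE picks its best DNN split point; with split points fixed, the edge CPU is re-shared using the closed form that minimizes the summed edge time; the two steps are repeated for a fixed number of rounds. The layer profiles and the sharing rule are assumptions.

```python
# Toy alternating optimization for multi-UE DNN partitioning + edge CPU sharing.
# Step 1: with CPU shares fixed, each UE picks its latency-minimizing split.
# Step 2: with splits fixed, shares minimizing the summed edge time follow the
# closed form share_u proportional to sqrt(offloaded cycles). Numbers are invented.
import math

EDGE_CYCLES_PER_MS = 50.0
UES = [  # per-UE hypothetical profile: (device_ms, edge_cycles, output_kb) per layer
    {"input_kb": 1200.0, "uplink_kb_ms": 100.0,
     "layers": [(30, 60, 800), (25, 50, 200), (20, 40, 30)]},
    {"input_kb": 500.0, "uplink_kb_ms": 40.0,
     "layers": [(12, 25, 400), (10, 20, 50)]},
]

def offloaded_cycles(ue, split):
    return sum(l[1] for l in ue["layers"][split:])

def latency(ue, split, share):
    layers = ue["layers"]
    device = sum(l[0] for l in layers[:split])
    upload_kb = 0.0 if split == len(layers) else (
        ue["input_kb"] if split == 0 else layers[split - 1][2])
    edge = offloaded_cycles(ue, split) / (share * EDGE_CYCLES_PER_MS)
    return device + upload_kb / ue["uplink_kb_ms"] + edge

def reshare(splits):
    w = [math.sqrt(offloaded_cycles(u, s)) for u, s in zip(UES, splits)]
    total = sum(w) or 1.0
    return [max(x / total, 1e-6) for x in w]        # avoid zero shares

splits = [0] * len(UES)                             # start fully offloaded
shares = reshare(splits)
for _ in range(10):                                 # alternate the two steps
    splits = [min(range(len(u["layers"]) + 1), key=lambda s: latency(u, s, f))
              for u, f in zip(UES, shares)]
    shares = reshare(splits)

print("splits:", splits, "shares:", [round(f, 2) for f in shares],
      "latencies:", [round(latency(u, s, f), 1)
                     for u, s, f in zip(UES, splits, shares)])
```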
6G Internet of Things: A comprehensive survey
  • Dinh C. Nguyen
  • Ming Ding
  • Pubudu N. Pathirana
  • Aruna Seneviratne
  • Jun Li
  • Dusit Niyato