Article

LEO Satellite and UAVs Assisted Mobile Edge Computing for Tactical Ad-Hoc Network: A Game Theory Approach

Authors:
  • UESTC&PLAUST&SEU
To read the full-text of this research, you can request a copy directly from the authors.

Abstract

As an emerging technology, the mobile edge computing (MEC) paradigm provides great computing potential for edge services and has been widely applied in benign urban environments. However, many challenges remain in deploying MEC in harsh tactical communication environments due to poor communication conditions, limited computational resources and hostile malicious interference. Thus, this paper investigates computational resource pricing and task offloading strategies in a tactical MEC Ad-hoc network, which consists of multiple tactical edge nodes, ground MEC servers, unmanned aerial vehicle-MEC (UAV-MEC) servers and a low earth orbit-MEC (LEO-MEC) satellite server. Each edge node can offload part of its computation-intensive task to the MEC servers to reduce computational delay and energy consumption. First, a multi-leader and multi-follower Stackelberg game (MLMF-SG), comprising a leader subgame for the MEC servers and a follower subgame for the edge nodes, is proposed to formulate the interaction between servers and edge nodes. It is proved that a Stackelberg equilibrium (SE) exists in the proposed MLMF-SG. To decrease delay, energy consumption and resource overhead, the follower subgame is further formulated as a multi-mode computation task offloading game. With the help of the exact potential game (EPG) framework, we prove that the follower subgame converges to a Nash equilibrium (NE). To achieve the SE, a hierarchical distributed iterative algorithm is designed to maximize the utilities of the leaders and followers. Finally, the simulation results demonstrate that the proposed scheme achieves better performance than existing schemes.
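Because the follower subgame is an exact potential game, asynchronous best-response updates are guaranteed to converge to a Nash equilibrium. The sketch below illustrates that convergence mechanism on a deliberately simplified stand-in (a server-selection congestion game with load-proportional delay; the paper's actual multi-mode utilities are richer):

```python
import random

def best_response_dynamics(n_nodes, n_servers, max_rounds=100, seed=0):
    """Toy exact potential game: each edge node picks one server and its
    delay grows with that server's load. Each strictly improving best
    response decreases the game's potential, so the loop terminates at
    a Nash equilibrium (no node can lower its delay unilaterally)."""
    rng = random.Random(seed)
    choice = [rng.randrange(n_servers) for _ in range(n_nodes)]
    for _ in range(max_rounds):
        changed = False
        for i in range(n_nodes):
            # Load seen by node i on each server, excluding itself.
            others = [0] * n_servers
            for j, s in enumerate(choice):
                if j != i:
                    others[s] += 1
            best = min(range(n_servers), key=lambda s: others[s])
            if others[best] < others[choice[i]]:  # strictly lower delay
                choice[i] = best
                changed = True
        if not changed:
            break  # Nash equilibrium reached
    return choice
```

At equilibrium the loads are balanced, mirroring how the EPG property lets the paper's distributed algorithm terminate without central coordination.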

No full-text available


... The power range of the entire functional module of the business satellite for sending inter-satellite signals is 10 W–50 W [10], the power range of the entire functional module for receiving signals is 5 W–25 W [10], and the power range of the entire functional module for computing functions is 60 W–415 W [11]. The clock frequency of the satellite CPU is 2 × 10^10 cycles/s [12]. The number of businesses ranges from 1000 to 9000, and these businesses are randomly distributed across different satellites [13]. ...
Article
Full-text available
This paper proposes a green computing strategy for low Earth orbit (LEO) satellite networks (LSNs), addressing energy efficiency and delay optimization in dynamic and energy-constrained environments. By integrating a Markov Decision Process (MDP) with a Double Deep Q-Network (Double DQN) and introducing the Energy–Delay Ratio (EDR) metric, this study effectively quantifies and balances energy savings with delay costs. Simulations demonstrate significant energy savings, with reductions of up to 47.87% under low business volumes, accompanied by a minimal delay increase of only 0.0161 s. For medium business volumes, energy savings reach 26.75%, with a delay increase of 0.0189 s, while high business volumes achieve a 4.36% energy reduction and a delay increase of 0.0299 s. These results highlight the proposed strategy’s ability to effectively balance energy efficiency and delay, showcasing its adaptability and suitability for sustainable operations in LEO satellite networks under varying traffic loads.
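The excerpt does not give the exact definition of the Energy–Delay Ratio (EDR); one plausible form consistent with the reported trade-offs is energy saved per unit of added delay, sketched below (function name and signature are illustrative, not taken from the paper):

```python
def energy_delay_ratio(energy_saved, delay_added):
    """Hypothetical EDR: energy saved per second of extra delay incurred
    by the green computing strategy, relative to an always-on baseline.
    A higher value indicates a more favorable energy/delay trade-off."""
    if delay_added <= 0:
        return float("inf")  # savings achieved with no delay penalty
    return energy_saved / delay_added
```

Under this reading, the low-business-volume case (large savings for only 0.0161 s of added delay) scores a much higher ratio than the high-volume case, matching the trend the abstract reports.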
... A multi-leader and multi-follower Stackelberg game (MLMF-SG) was developed to model the negotiation between servers and edge nodes in a self-organizing network of MECs. They devised a hierarchical distributed iterative algorithm to maximize the utility of both the leaders and the followers [28]. ...
Article
Full-text available
Due to the rapid development of the Internet of Vehicles (IoV), the combination of IoV and edge computing, known as vehicle edge computing (VEC), has received considerable attention from both academia and industry. However, task offloading in diverse intersection scenarios still suffers from inefficient resource allocation and low quality of service for task execution, owing to imbalanced traffic flow and rigid latency requirements. To address these issues, we develop a task offloading strategy based on a fuzzy decision-making algorithm to handle uncertainty and imprecision. This strategy comprises two components: (1) A VEC resource pool of available vehicles at each intersection is constructed, taking the rotating direction of the recognition region into account. A fuzzy decision-making algorithm then selects a set of high-quality service vehicles from this pool to act as an auxiliary edge server (AS). (2) An edge service provider (ESP) manages the computational resources of a main edge server (MS) and an AS deployed at a traffic intersection. The negotiation between the ESP and the task vehicles is modeled as a Stackelberg game. We prove the existence of a unique perfect Nash equilibrium, and a genetic algorithm is applied to find the optimum. Finally, we conduct simulation experiments with datasets collected in real-world scenarios. The results demonstrate that our scheme decreases task execution time by 9.73% compared to the cloud server scheme and reduces energy consumption by 13.78% compared to the state-of-the-art reinforcement learning (RL) strategy.
... A multi-agent reinforcement learning based task offloading algorithm was proposed to solve the problem. The authors of [46] considered a similar network architecture but also included peer-to-peer (P2P) communication between ground users, allowing computation tasks to be offloaded to peer users for execution. They aimed to simultaneously improve task latency, energy consumption and resource costs by optimizing users' offloading decisions. ...
Article
Full-text available
The sixth-generation (6G) network is envisioned to shift its focus from the service requirements of human beings to those of Internet-of-Things (IoT) devices. Satellite communications are indispensable in 6G to support IoT devices operating in rural or disaster areas. However, satellite networks face the inherent challenges of low data rate and large latency, which may not support computation-intensive and delay-sensitive IoT applications. Mobile Edge Computing (MEC) is a burgeoning paradigm by extending cloud computing capabilities to the network edge. Using MEC technologies, the resource-limited IoT devices can access abundant computation resources with low latency, which enables the highly demanding applications while meeting strict delay requirements. Therefore, an integration of satellite communications and MEC technologies is necessary to better enable 6G IoT. In this survey, we provide a holistic overview of satellite-MEC integration. We first categorize the related studies based on three minimal structures and summarize current advances. For each minimal structure, we discuss the lessons learned and possible future directions. We also summarize studies considering the combination of minimal structures. Finally, we outline potential research issues to envision a more intelligent, more secure, and greener integrated satellite-MEC network.
Article
As a supplement to terrestrial communication networks, satellite edge computing can break through geographical limitations and provide on-orbit computing services to people in remote areas, achieving truly seamless global coverage. Considering time-varying channels, queuing delays, and the dynamic loads of edge computing satellites, we propose a multi-agent task offloading and resource allocation (MATORA) algorithm with weighted latency as the optimization goal. The resulting mixed-integer nonlinear problem is decoupled into task offloading and resource allocation sub-problems. For the offloading sub-problem, we propose a distributed multi-agent deep reinforcement learning algorithm in which each agent generates its own offloading decision without prior knowledge of the others. We show that the resource allocation problem is convex and can be solved using convex optimization methods. Experiments show that the proposed algorithm adapts well to channel variations and the dynamic load of edge computing satellites, and effectively reduces task latency and task drop rate.
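Convex resource-allocation sub-problems of this kind often admit a closed form. As an illustration (not the paper's exact model), minimizing a weighted computing delay sum w_i * c_i / f_i over CPU allocations f_i subject to a total budget F yields f_i proportional to sqrt(w_i * c_i) by the KKT conditions:

```python
import math

def cpu_allocation(weights, cycles, total_cpu):
    """Closed-form allocation for: minimize sum_i w_i * c_i / f_i
    subject to sum_i f_i = total_cpu, f_i > 0.
    The KKT stationarity condition w_i * c_i / f_i**2 = mu (same
    multiplier mu for all i) gives f_i proportional to sqrt(w_i * c_i)."""
    s = [math.sqrt(w * c) for w, c in zip(weights, cycles)]
    norm = sum(s)
    return [total_cpu * x / norm for x in s]
```

For example, a task with four times the weight receives twice (not four times) the CPU share, since the delay falls off as 1/f.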
Article
Through deploying satellites and unmanned aerial vehicles (UAVs) with onboard processing capability, the space-air-ground edge computing network (SAGECN) is poised to support ubiquitous access and computation offloading for Internet of Things (IoT) terminals deployed in remote areas. However, the current SAGECN faces several challenges in realizing its full potential, such as scarce spectrum resources, diverse computational demands, and dynamic network circumstances. To meet these challenges, we propose a cluster-non-orthogonal multiple access (C-NOMA)-enabled SAGECN model, where a satellite and multiple UAVs act as collaborative edge servers to execute tasks from IoT terminals. Since each offloaded task must be processed by a specific program, the edge servers perform program caching, while transferring tasks that do not match the cached programs to another server in a multi-hop manner. Considering the delay-sensitive requirements of computation tasks, we formulate a joint task offloading, communication-computation-cache resource assignment, and routing plan problem, aimed at minimizing the average system latency. To cope with this challenging problem, we partition it into three subproblems. First, a multi-agent learning-based approach is developed to collaboratively train the task offloading, flight trajectory, and program caching. As a further step, two optimization subroutines are embedded to perform routing planning, subchannel allocation, and power control, thereby rendering the overall solution. Experimental results reveal that our approach achieves outstanding performance in terms of system delay and spectrum efficiency.
Article
Recently, the hierarchical coalition-based Ad-hoc network (HCAN) has been adopted to establish link connections in adversarial environments. In this paper, a game-theoretic link access (GTLA) approach is proposed to maximize the sum rate of the HCAN. Our scheme consists of two phases: the assignment between coalition heads (CHs) and coalition members (CMs), and the matching between CHs and disconnected nodes (DNs). First, the CH-CM assignment problem is formulated as a coalition formation game (CFG). It is proved that the CFG with the two-side best order is an exact potential game (EPG), which can significantly mitigate co-channel interference and hostile jamming. Second, the CH-DN matching problem is formulated as a bilateral many-to-one matching game. A partial mutual benefit order (PMBO) based matching algorithm with low computational complexity is then designed to reconstruct broken links. Finally, simulation results are provided to verify the superior performance of our scheme.
Article
Edge computing is an efficient way to offload computational tasks for user equipment (UE) with computation-intensive and latency-sensitive tasks. However, UEs cannot offload to ground edge servers when they are in remote areas. Mounting edge servers on low earth orbit (LEO) satellites can provide remote UEs with task offloading when ground infrastructure is unavailable. In this paper, we introduce a multi-satellite-enabled edge computing system for offloading UEs' computational tasks, with the aim of minimizing system energy consumption by optimizing user association, power control, task scheduling, and computing resource allocation. Specifically, part of a UE's task is executed locally and the rest is offloaded to a satellite for processing. This energy minimization problem is formulated as a mixed-integer nonlinear programming (MINLP) problem. By decomposing the original problem into four sub-problems, we solve each sub-problem with convex optimization methods. In addition, an iterative algorithm is proposed to jointly optimize the task offloading and resource allocation strategy, which achieves a near-optimal solution after several iterations. Finally, the complexity and convergence of the algorithm are verified. In our simulations, the proposed algorithm is compared with different task offloading and resource allocation schemes in terms of system energy consumption, saving up to 43% of energy.
Article
With the rapid development of large low earth orbit (LEO) satellite constellations, satellite edge computing is an emerging approach to provide computing services for Internet of Things (IoT) users outside the coverage of terrestrial networks. For computation offloading in satellite edge computing, it is still challenging to allocate network resources on demand for IoT users to improve service experience while reducing energy consumption, since user tasks may be offloaded between different satellites over inter-satellite links (ISLs). In this paper, we study the joint optimization of computation offloading and resource allocation in cooperative satellite edge computing. A hierarchical dynamic resource allocation (HDRA) algorithm for computation offloading is proposed, introducing breadth-first search (BFS) and a greedy strategy to tackle the problem, with the aim of jointly minimizing service delay and energy consumption. We conduct experiments to evaluate the performance of the proposed HDRA algorithm against two baselines, BFS-PSO and Gurobi. Experimental results show that the proposed HDRA algorithm addresses the formulated problem effectively and obtains computation offloading and resource allocation results in low running time.
Article
Full-text available
The low earth orbit (LEO) satellite network is an important development trend for future mobile communication systems and can truly realize 'ubiquitous connection' across the whole world. In this paper, we present cooperative computation offloading in the LEO satellite network with a three-tier computation architecture, leveraging vertical cooperation among ground users, LEO satellites, and the cloud server, and horizontal cooperation between LEO satellites. To improve the quality of service for ground users, we optimize the computation offloading decisions to minimize the total execution delay of ground users, subject to their limited battery capacity and the computation capability of each LEO satellite. However, the formulated problem becomes a large-scale nonlinear integer program as the number of ground users and LEO satellites increases, which is difficult to solve with general optimization algorithms. To address this challenge, we propose a distributed deep learning-based cooperative computation offloading (DDLCCO) algorithm, where multiple parallel deep neural networks (DNNs) learn the computation offloading strategy dynamically. Simulation results show that the proposed algorithm can achieve near-optimal performance with low computational complexity compared with other computation offloading strategies.
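A common ingredient of such distributed deep-learning offloading schemes (in the style of DROO; the details below are illustrative, not the paper's exact design) is to quantize one relaxed DNN output into several binary candidate decisions and keep the one with the lowest evaluated delay:

```python
import numpy as np

def pick_offloading(scores, delay_fn, k=4):
    """Quantize a relaxed offloading score vector (one entry per user,
    in [0, 1]) into k binary candidates by flipping the most uncertain
    bits one at a time, then keep the candidate whose evaluated delay
    (delay_fn) is lowest. delay_fn stands in for a system-delay model."""
    scores = np.asarray(scores, dtype=float)
    base = (scores > 0.5).astype(int)          # hard threshold
    order = np.argsort(np.abs(scores - 0.5))   # most uncertain first
    candidates = [base]
    for i in order[: k - 1]:
        c = base.copy()
        c[i] ^= 1  # flip one uncertain offloading decision
        candidates.append(c)
    return min(candidates, key=delay_fn)
```

Evaluating a handful of binary candidates per DNN output keeps the decision integer-valued without solving the full nonlinear integer program.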
Article
Full-text available
This paper utilizes a reconfigurable intelligent surface (RIS) to enhance the anti-jamming performance of wireless communications, owing to its powerful capability of constructing smart and reconfigurable radio environments. To model the practical interactions between the base station (BS) and the jammer, a Bayesian Stackelberg game is formulated, where the BS is the leader and the jammer acts as the follower. Specifically, with the help of a RIS-assisted transmitter, the BS attempts to reliably convey information to users with maximum utility, whereas the smart jammer tries to interfere with the signal reception of users subject to a desired energy efficiency (EE) threshold. Since the BS and the jammer are not cooperative parties, the proposed game adopts the practical assumption that neither side can obtain the other's strategies, and angular-information-based imperfect channel state information (CSI) is also considered. After handling the practical assumption via the Cauchy-Schwarz inequality and the imperfect angular information via a discretization method, the closed-form solutions of both sides are obtained through duality optimization theory, which constitute the unique Stackelberg equilibrium (SE). Numerical results demonstrate the superiority and validity of our proposed robust schemes over existing approaches.
Article
Full-text available
With the explosive growth of computation requirements, the multi-access edge computing (MEC) paradigm appears as an effective mechanism. Moreover, for Internet of Things (IoT) devices in disaster or remote areas requiring MEC services, unmanned aerial vehicles (UAVs) and high altitude platforms (HAPs) can provide aerial computing services. In this paper, we develop a hierarchical aerial computing framework composed of HAPs and UAVs to provide MEC services for various IoT applications. In particular, the problem is formulated to maximize the total IoT data computed by the aerial MEC platforms, restricted by the delay requirements of IoT and multiple resource constraints of UAVs and HAPs; this is an integer programming problem and intractable to solve. Due to the prohibitive complexity of exhaustive search, we handle the problem with a matching game theory based algorithm for the offloading decisions from IoT devices to UAVs, and a heuristic algorithm for the offloading decisions between UAVs and HAPs. The external effect caused by the interplay of different IoT devices in the matching is tackled by an externality elimination mechanism. An adjustment algorithm is also proposed to make the best use of aerial resources. The complexity of the proposed algorithms is analyzed, and extensive simulation results verify their efficiency; the system performance is also analyzed through numerical results.
Article
Full-text available
As the sixth-generation (6G) network is under research, one important issue is the aerial access network and terrestrial-space integration. Internet of remote things (IoRT) sensors can access unmanned aerial vehicles (UAVs) in the air, and low earth orbit (LEO) satellite networks in space help provide lower transmission delay for delay-sensitive IoRT data. Therefore, in this paper, we consider LEO satellite assisted UAV data collection for IoRT sensors. Specifically, a UAV collects data from IoRT sensors and returns it to earth via two transmission modes: delay-tolerant data leverages the carry-store mode of UAVs, while delay-sensitive data utilizes UAV-satellite network transmission. Considering the limited payloads of UAVs, we focus on minimizing the total energy cost (trajectory and transmission) of UAVs while satisfying the IoRT demands. Since a direct solution is intractable, we apply the Dantzig-Wolfe decomposition and design column generation based algorithms to solve the problem efficiently. Moreover, we present a heuristic algorithm for the subproblem to further reduce the complexity in large-scale networks. Finally, numerical results verify the efficiency of the proposed algorithms, and the advantage of LEO satellite-assisted UAV trajectory design combined with data transmission is also analyzed.
Article
Full-text available
Given the limited transmission power of smart devices in the Internet of remote things (IoRT), unmanned aerial vehicle (UAV) aided space-air-ground (SAG) networks become a beneficial remedy for uplink data transmission in IoRT networks. In this paper, we propose a SAG-IoRT framework, where drones act as relays to upload data from smart devices to low earth orbit (LEO) satellites. Considering the large number of smart devices, we maximize the system capacity by jointly optimizing smart device connection scheduling, power control and UAV trajectory. The formulated problem is a mixed-integer non-convex optimization problem, which is challenging to solve directly. Hence, an efficient iterative algorithm is proposed by applying variable substitution, successive convex approximation (SCA) techniques and the block coordinate descent (BCD) algorithm. In particular, we alternately iterate over smart device connection scheduling, power control and UAV trajectory design to obtain the maximum system capacity. Numerical simulation results show that our proposed algorithm substantially improves the system capacity in comparison to a static UAV scheme, and achieves a gain of at least 22.3% in capacity over a dynamic UAV scheme with a circular trajectory.
Article
Full-text available
Space-air-ground networks play important roles in both fifth generation (5G) and sixth generation (6G) techniques. Low earth orbit (LEO) satellites and high altitude platforms (HAPs) are key components of space-air-ground networks, providing access services for massive mobile and Internet of Things (IoT) users, especially in remote areas lacking ground base station coverage. LEO satellite networks provide global coverage, while HAPs provide terrestrial users with closer, stable massive access service. In this work, we consider the cooperation of LEO satellites and HAPs for the massive access and data backhaul of remote area users. The problem is formulated to maximize the revenue of LEO satellites, in the form of a mixed-integer nonlinear program. Since finding the optimal solution by exhaustive search is prohibitively complex at large network scales, we propose a satellite-oriented restricted three-sided matching algorithm to handle the matching among users, HAPs, and satellites. Furthermore, to tackle the dynamic connections between satellites and HAPs caused by the periodic motion of satellites, we present a two-tier matching algorithm composed of a Gale-Shapley-based matching algorithm between users and HAPs, and a random-path-to-pairwise-stable matching algorithm between HAPs and satellites. Numerical results show the effectiveness of the proposed algorithms.
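The Gale-Shapley stage between users and HAPs follows standard deferred acceptance. A minimal many-to-one sketch (the instance data, names, and capacities below are illustrative, not from the paper):

```python
def gale_shapley(user_prefs, hap_prefs, capacity):
    """Deferred acceptance, many-to-one: users propose in preference
    order; each HAP tentatively keeps its best proposers up to its
    capacity and rejects the rest. Rejected users propose to their
    next choice; unmatched users with exhausted lists stay unmatched."""
    rank = {h: {u: r for r, u in enumerate(p)} for h, p in hap_prefs.items()}
    free = list(user_prefs)
    nxt = {u: 0 for u in user_prefs}          # next preference to try
    matched = {h: [] for h in hap_prefs}
    while free:
        u = free.pop()
        if nxt[u] >= len(user_prefs[u]):
            continue  # u exhausted its list: remains unmatched
        h = user_prefs[u][nxt[u]]
        nxt[u] += 1
        matched[h].append(u)
        matched[h].sort(key=lambda x: rank[h][x])  # best-ranked first
        if len(matched[h]) > capacity[h]:
            free.append(matched[h].pop())  # reject worst-ranked proposer
    return matched
```

The outcome is stable: no user-HAP pair would both prefer each other over their assigned match, which is why deferred acceptance is a natural building block for the two-tier algorithm described above.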
Article
Full-text available
In this paper, we investigate the problem of opportunistic UAV transmission in D2D communication networks. UAVs are assumed to assist transmissions of D2D users while performing flying missions with given trajectories. On one hand, users can select appropriate UAVs as real-time relays according to the topology in the sky at different moments. On the other hand, due to their flight characteristics, UAVs can receive uploading data when approaching transmitters, and then offload the data to the corresponding receivers at an appropriate later time. Users need to select and adjust transmission modes dynamically, including multi-UAV selection, time allocation for data loading and offloading, and competition for channel access. We design a hierarchical game model to analyze the complicated relationships among devices. Specifically, a predictable dynamic matching market is constructed to address UAV selection and time allocation, while the channel access problem is studied via a congestion game. Distributed algorithms are then proposed and their convergence properties are discussed. Simulation results confirm that the effective opportunistic UAV transmission approach can improve the global network performance significantly, while unreasonable optimization approaches may degrade transmission performance.
Article
Full-text available
In this letter, we propose a novel offloading learning approach to balance energy consumption and latency in a multi-tier network with mobile edge computing. To solve this integer programming problem, instead of using conventional optimization tools, we apply a cross-entropy approach with iterative learning of the probability of elite solution samples. Compared to existing methods, the proposed approach permits a parallel computing architecture and is verified to be computationally very efficient. Specifically, it achieves performance close to the optimum and performs well under different choices of hyperparameter values in the proposed learning approach.
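The cross-entropy idea above can be sketched as follows: keep a Bernoulli probability per binary offloading decision, sample candidate vectors, and refit the probabilities to the elite (lowest-cost) samples. This is a generic CEM sketch; the hyperparameters, smoothing constant, and cost function are illustrative, not those of the letter:

```python
import numpy as np

def cem_offload(cost, n, iters=30, pop=64, elite=8, seed=0):
    """Cross-entropy method over binary offloading vectors of length n.
    `cost` is any user-supplied objective mixing energy and latency.
    Each iteration: sample `pop` vectors from the current Bernoulli
    distribution, keep the `elite` cheapest, and move the per-bit
    probabilities toward the elite sample mean (with smoothing)."""
    rng = np.random.default_rng(seed)
    p = np.full(n, 0.5)                       # start uninformed
    best, best_cost = None, np.inf
    for _ in range(iters):
        x = (rng.random((pop, n)) < p).astype(int)
        c = np.array([cost(v) for v in x])
        idx = np.argsort(c)[:elite]           # elite = lowest cost
        p = 0.7 * x[idx].mean(axis=0) + 0.3 * p  # smoothed update
        if c[idx[0]] < best_cost:
            best, best_cost = x[idx[0]].copy(), c[idx[0]]
    return best, best_cost
```

Note that the `pop` cost evaluations per iteration are independent, which is exactly what makes the parallel computing architecture mentioned in the abstract possible.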
Article
Full-text available
In this paper, we employ an unmanned aerial vehicle (UAV) as a flying base station (BS) to process application tasks migrated from terminal devices (TDs), saving the TDs' energy consumption. To tackle the huge volume of data at the TDs, we propose a resource partitioning scheme where one portion of the bits at a TD is computed locally and the remaining portion is transmitted to the UAV for computing. Our goal is to minimize the total energy consumption of the TDs by jointly optimizing bit allocation, resource partitioning, power allocation at the TDs/UAV, TD-UAV scheduling, and the UAV trajectory under a one-by-one access scheme. Due to the non-convexity of the original problem with mixed integer variables, we decompose it into two sub-problems. In the first sub-problem, the TD-UAV scheduling is obtained by solving the dual problem for a given UAV trajectory. In the second sub-problem, the UAV trajectory is obtained using successive convex optimization techniques for a given TD-UAV scheduling. An iterative algorithm is then proposed to optimize the TD-UAV scheduling and UAV trajectory alternately. Numerical results demonstrate the superiority of our proposed scheme over the benchmarks.
Article
Full-text available
The use of flying platforms such as unmanned aerial vehicles (UAVs), popularly known as drones, is rapidly growing in a wide range of wireless networking applications. In particular, with their inherent attributes such as mobility, flexibility, and adaptive altitude, UAVs admit several key potential applications in wireless systems. On the one hand, UAVs can be used as aerial base stations to enhance coverage, capacity, reliability, and energy efficiency of wireless networks. For instance, UAVs can be deployed to complement existing cellular systems by providing additional capacity to hotspot areas as well as to provide network coverage in emergency and public safety situations. On the other hand, UAVs can operate as flying mobile terminals within the cellular networks. In this paper, a comprehensive tutorial on the potential benefits and applications of UAVs in wireless communications is presented. Moreover, the important challenges and the fundamental tradeoffs in UAV-enabled wireless networks are thoroughly investigated. In particular, the key UAV challenges such as three-dimensional deployment, performance analysis, air-to-ground channel modeling, and energy efficiency are explored along with representative results. Then, fundamental open problems and potential research directions pertaining to wireless communications and networking with UAVs are introduced. To cope with the open research problems, various analytical frameworks and mathematical tools such as optimization theory, machine learning, stochastic geometry, transport theory, and game theory are described. The use of such tools for addressing unique UAV problems is also presented. In a nutshell, this tutorial provides key guidelines on how to analyze, optimize, and design UAV-based wireless communication systems.
Article
Full-text available
In this paper, a game-theoretic framework is proposed for coordinating resource partitioning and data offloading in LTE-based HetNets. The goal of this framework is to determine the amount of radio resources a macrocell should offer to neighboring small cells (SCs) and the amount of traffic each SC should admit from the macrocell. A two-stage Stackelberg game is applied to optimize the strategies of both the macrocell (the leader) and the SCs (the followers). The macrocell's strategy is shown to be a mixed-boolean nonlinear program, which is NP-hard. To solve this problem efficiently, a branch-and-bound based method is proposed to obtain the global optimum. We also show that this two-stage game has a unique Stackelberg equilibrium. Numerical results show that the proposed framework outperforms the traditional design by 50% in terms of offloaded data. Additionally, a 14% reduction was observed in the cost paid by the MBS.
Article
We study the data offloading problem in space-air-ground integrated networks (SAGINs) by jointly optimizing task scheduling and power control to balance the total energy consumption and mean makespan. We consider a mixed-integer nonlinear programming problem to minimize a normalized weighted combination of these two conflicting objectives. We first propose an approximation algorithm to find a high-quality solution, which is shown to be at most $\frac{1}{2}$ from the optimum for a given power allocation. We further show that the optimal power allocation can be obtained in closed form under the assumption that satellite-ground links have low signal-to-noise ratio (SNR). Thus, the proposed approximation algorithm can be directly utilized to obtain a constant-factor solution to the studied problem in low-SNR scenarios. To extend our solution to more general scenarios, we further propose an efficient hybrid algorithm based on a genetic framework. Our simulation results demonstrate the near-optimality and correctness of the proposed algorithms, and they unveil the interplay between total energy consumption and mean makespan in SAGINs.
Article
Mobile edge computing can effectively reduce service latency and improve service quality by offloading computation-intensive tasks to the edges of wireless networks. Due to their flexible deployment, wide coverage and reliable wireless communication, unmanned aerial vehicles (UAVs) have been employed as assisted edge clouds (ECs) for large-scale, sparsely distributed user equipment. Considering the limited computation and energy capacities of UAVs, a collaborative mobile edge computing system with multiple UAVs and multiple ECs is investigated in this paper. The task offloading problem is addressed to minimize the sum of execution delays and energy consumption by jointly designing the trajectories, computation task allocation, and communication resource management of the UAVs. To solve this non-convex optimization problem, a Markov decision process is formulated for the multi-UAV assisted mobile edge computing system. To obtain the joint strategy of trajectory design, task allocation, and power management, a cooperative multi-agent deep reinforcement learning framework is investigated. Considering the high-dimensional continuous action space, the twin delayed deep deterministic policy gradient algorithm is exploited. The evaluation results demonstrate that our multi-UAV multi-EC task offloading method achieves better performance than other optimization approaches.
Article
The satellite-terrestrial cooperative network is considered an emerging network architecture that can adapt to various services and applications in future communication networks. In recent years, the combination of satellite communication and Mobile Edge Computing (MEC) has become an emerging research hotspot. Satellite edge computing can provide users with fully covered on-orbit computing services by deploying MEC servers on satellites. This paper studies task offloading for multiple users and multiple edge computing satellites and proposes a novel joint task offloading and communication-computing resource optimization (JTO-CCRO) algorithm. The JTO-CCRO is decoupled into task offloading and resource allocation sub-problems. Through mutual iteration of the two sub-problems, the system utility function can be further reduced. For the task offloading sub-problem, we first establish that it is a game problem, so the offloading strategy can be obtained from the Nash equilibrium solution. We confirm that the resource optimization sub-problem is a convex optimization problem that can be solved by the Lagrange multiplier method. Simulations show that the JTO-CCRO algorithm converges quickly and effectively reduces the system utility function.
Article
Mega-LEO satellite constellations are becoming a concrete reality. Companies such as SpaceX, Virgin Orbit, and OneWeb have already started launching hundreds of LEO satellites and are turning their services on. Even if the aim of such LEO satellite constellations is, for now, simply to offer worldwide Internet access equality, their deployment proves their feasibility and suggests their usefulness for further purposes. In this article, we shed some light on the possible integration of the in-network computing paradigm into mega-LEO satellite constellations. Terrestrial and/or non-terrestrial nodes can benefit from offloading computation to an orbital edge (OE) platform reachable through the satellite constellation, exploiting its fast and distributed computational capability. In this context, a preliminary analysis highlights that task offloading strategies can lead to performance improvements that open up novel challenges in the design and setup of OE platforms.
Article
Recently, the development of unmanned aerial vehicle (UAV) mobile-edge computing (MEC) networks has brought unprecedented gains and opportunities. In this article, the joint computation offloading, UAV role, and location selection problem in hierarchical multicoalition UAV MEC network is investigated. To capture the hierarchical feature and discrete optimization, the discrete Stackelberg game with multiple leaders and followers is formulated. We prove that both the leader-level and member-level subgames are ordinal potential games (OPGs) with Nash equilibrium (NE). Thus, the Stackelberg equilibrium (SE) is guaranteed. To achieve the SE, the log-linear-based hierarchical learning algorithm (LHLA) is proposed and analyzed. The simulation results show that the LHLA can converge fast and achieve better performance compared with the existing schemes.
Article
Due to their flexible mobility and agility, unmanned aerial vehicles (UAVs) are expected to be deployed as aerial base stations (BSs) in future air-ground integrated wireless networks, providing temporary and controllable coverage and additional computation capabilities for ground Internet of Things (IoT) devices with or without infrastructure support. Meanwhile, with the breakthrough of artificial intelligence (AI), more and more AI applications relying on methods such as deep neural networks (DNNs) are expected to be applied in fields such as smart homes, smart factories and smart cities, dramatically improving our lifestyles and efficiency. However, AI applications are generally computation-intensive, latency-sensitive, and energy-consuming, leaving resource-constrained IoT devices unable to benefit from AI anytime and anywhere. In this paper, we study mobile edge computing (MEC) for AI applications in air-ground integrated wireless networks. Our goal is to minimize the service latency while satisfying the learning accuracy and energy consumption requirements. To achieve this, we take the DNN as the typical AI application and formulate an optimization problem that optimizes the DNN model decision, computation and communication resource allocation, and UAV trajectory control, subject to the energy consumption, latency, computation and communication resource constraints. Since the formulated problem is non-convex, we decompose it into multiple convex subproblems and then alternately solve them until they converge to the desired solution. Simulation results show that the proposed algorithm significantly improves the system performance for AI applications.
Article
Non-orthogonal multiple access (NOMA) in satellite communication (SATCOM) systems can bring high spectral efficiency and massive connectivity. In this paper, we investigate file delivery and sharing optimization in LEO satellite-terrestrial integrated networks (STINs). A NOMA-based coalition formation game (CFG) approach is proposed for minimizing the total cost, in which the satellite transfers files to the head users of each group via NOMA, and head users utilize device-to-device (D2D) communications to share files among users. Firstly, a head-user selection algorithm is proposed to choose each group's head users to receive files from the satellite. Precoding vectors and transmit power optimization with imperfect channel state information (CSI) are derived to achieve successful NOMA transmission and minimize the file download cost in each group. A graph-theory-based algorithm is proposed to find the optimal D2D links and minimize the file sharing cost. Then, we formulate a CFG and propose a preference order (Group-Best order) for user grouping. Furthermore, we prove that the CFG with the Group-Best order is an exact potential game (EPG), which reaches a stable group partition and achieves global optimization, i.e., minimizes the total cost. Finally, a best-response algorithm for the NOMA-based CFG (NCFG) is proposed to find the stable group partition. Simulation results verify that our proposed approach outperforms other approaches.
Article
Vehicular Edge Computing (VEC) is a promising paradigm in which vehicles offload computation tasks to a nearby VEC server, with the aim of supporting low-latency vehicular application scenarios. Incentivizing VEC servers to participate in computation offloading and to make full use of their computation resources is of great importance to the success of intelligent transportation services. In this paper, we formulate the competitive interactions between the VEC servers and vehicles as a two-stage Stackelberg game with the VEC servers as the leaders and the vehicles as the followers. After obtaining the full information of the vehicles, the VEC server calculates the unit price of computation resource. Given the unit prices announced by the VEC server, the vehicles determine the amount of computation resource to purchase from it. For the scenario in which vehicles do not want to share their computation demands, a deep reinforcement learning based resource management scheme is proposed to maximize the profits of the vehicles and the VEC server. Extensive experimental results demonstrate the effectiveness of the proposed resource management scheme based on the Stackelberg game and deep reinforcement learning.
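The backward-induction logic of such a two-stage Stackelberg pricing game can be sketched with a single-follower toy model (the logarithmic utility, the unit cost, and the price grid are illustrative assumptions, not the paper's formulation): the follower best-responds to any announced unit price, and the leader picks the price anticipating that response.

```python
def follower_demand(price, a=4.0):
    # Follower maximizes a*ln(1+d) - price*d; the stationary point
    # is d = a/price - 1, clipped at zero.
    return max(a / price - 1.0, 0.0)

def leader_best_price(cost=1.0, a=4.0):
    # Leader anticipates follower_demand and maximizes profit
    # (price - cost) * demand over a simple price grid.
    grid = [0.01 * k for k in range(1, 1001)]
    return max(grid, key=lambda p: (p - cost) * follower_demand(p, a))
```

For a = 4 and unit cost 1 the analytical Stackelberg price is sqrt(cost*a) = 2, which the grid search recovers.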
Article
The internet of satellites (IoS), containing multiple low earth orbit (LEO) satellite constellations, can support tremendous traffic, massive connectivity, and vast coverage. However, it also imposes higher demands on resource management due to the high dynamics of IoS networks, especially in a malicious jamming environment, where jammers launch attacks to reduce efficiency and reliability. Thus, this paper investigates the problem of resource management in a malicious jamming environment for the IoS, which is divided into three sub-problems: traffic prediction, anti-jamming decision and resource matching. To solve these problems, we propose a distributed resource management framework (DRMF), which consists of three sub-algorithms. Firstly, the traffic prediction algorithm (TPA) is proposed to deeply mine and accurately predict traffic patterns. Meanwhile, the dynamic anti-jamming algorithm (DAA) is developed to make anti-jamming decisions autonomously. Then, based on the outputs of TPA and DAA, the distributed resource matching algorithm (DRMA) is proposed for the IoS, in which satellites with insufficient resources can apply for assistance from neighboring satellites with surplus resources, thereby improving the safety and efficiency of the entire IoS network. Finally, experimental results and algorithm analysis verify that the proposed scheme outperforms existing algorithms.
Article
The emergence of intelligent applications produces a growing demand for computing. How to relieve the computation pressure in mobile edge computing (MEC) under massive computation demand is an urgent problem. Specifically, the allocation of heterogeneous resources, including communication and computing resources, needs to be optimized simultaneously. From the perspective of jointly optimizing channel allocation, device-to-device (D2D) pairing, and offloading mode, this paper studies the multi-user computation task offloading problem in device-enhanced MEC. The objective is to maximize the aggregate offloading benefit, i.e., the tradeoff between delay and energy consumption, of all compute-intensive users in the network. By introducing game theory, the problem is modeled as a multi-user computation task offloading game, which is proved to be an exact potential game (EPG) with at least one pure-strategy Nash equilibrium (NE). To find a desirable solution, this paper proposes a better-reply based distributed multi-user computation task offloading algorithm (BR-DMCTO). Simulation results show that the proposed offloading mechanism can improve the benefit of users, and verify the effectiveness and convergence of the proposed algorithm.
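The convergence argument behind such EPG-based offloading can be sketched on a toy congestion game (the channel-load cost and all sizes are illustrative assumptions; the paper's D2D pairing and offloading modes are omitted): each user repeatedly switches to any channel that strictly lowers its own congestion, and because this is an exact potential game, the better-reply loop must terminate at a pure-strategy NE.

```python
import random

def better_reply_dynamics(n_users=6, n_channels=3, seed=0):
    """Better-reply dynamics in a congestion game: each user's cost is
    the load on its chosen channel; the potential function strictly
    decreases with every move, so the loop terminates at a pure NE."""
    rng = random.Random(seed)
    choice = [rng.randrange(n_channels) for _ in range(n_users)]

    def load(ch):
        return sum(1 for c in choice if c == ch)

    improved = True
    while improved:
        improved = False
        for i in range(n_users):
            cur = load(choice[i])  # cost = congestion on own channel
            for ch in range(n_channels):
                if ch != choice[i] and load(ch) + 1 < cur:
                    choice[i] = ch  # unilateral better reply
                    improved = True
                    break
    return choice
```

With 6 users and 3 channels the only NE loads are perfectly balanced, [2, 2, 2].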
Article
In this letter, we investigate the joint optimization of computation offloading, channel allocation and position deployment problem for coalition-based unmanned aerial vehicle (UAV) mobile edge computing networks. To characterize the hierarchical offloading feature in coalition-based UAV network, we propose the hierarchical framework and formulate a discrete multi-leader multi-follower energy minimization Stackelberg game. In addition, we prove that the multi-leader subgame and multi-follower subgame are exact potential games with Nash equilibrium. Then the existence of Stackelberg equilibrium (SE) is guaranteed. The spatial adaptive play-best response based hierarchical iterative learning (SAP-BRHIL) algorithm is proposed to achieve the SE. The simulation results show that the proposed SAP-BRHIL algorithm can achieve the lowest network energy consumption compared with some existing approaches.
Article
With the advance of unmanned aerial vehicles (UAVs) and low earth orbit (LEO) satellites, the integration of space, air and ground networks has become a potential solution to the beyond fifth generation (B5G) Internet of remote things (IoRT) networks. However, due to the network heterogeneity and the high mobility of UAVs and LEOs, how to design an efficient UAV-LEO integrated data collection scheme without infrastructure support is very challenging. In this paper, we investigate the resource allocation problem for a two-hop uplink UAV-LEO integrated data collection for the B5G IoRT networks, where numerous UAVs gather data from IoT devices and transmit the IoT data to LEO satellites. In order to maximize the data gathering efficiency in the IoT-UAV data gathering process, we study the bandwidth allocation of IoT devices and the 3-dimensional (3D) trajectory design of UAVs. In the UAV-LEO data transmission process, we jointly optimize the transmit powers of UAVs and the selections of LEO satellites for the total uploaded data amount and the energy consumption of UAVs. Considering the relay role and the cache capacity limitations of UAVs, we merge the optimizations of IoT-UAV data gathering and UAV-LEO data transmission into an integrated optimization problem, which is solved with the aid of the successive convex approximation (SCA) and the block coordinate descent (BCD) techniques. Simulation results demonstrate that the proposed scheme achieves better performance than the benchmark algorithms in terms of both energy consumption and total upload data amount.
Article
In this paper, we propose a novel optimization framework for a secure and green mobile edge computing (MEC) network through a deep reinforcement learning approach, where secure data transmission is threatened by an unmanned aerial vehicle (UAV). To alleviate the local computation burden, some computational tasks can be offloaded to the computational access points (CAPs), at the cost of price, transmission latency and energy consumption. By jointly reducing the price, latency and energy consumption, we propose a novel optimization framework for the secure MEC network based on deep reinforcement learning. Specifically, we first employ several optimization criteria: criterion I minimizes the linear combination of price, latency and energy consumption; criterion II minimizes the price with constrained latency and energy consumption; criterion III minimizes the latency with constrained price and energy consumption; and criterion IV minimizes the energy consumption with constrained price and latency. For each criterion, we then propose an optimization framework which can dynamically adjust the task offloading ratio and bandwidth allocation ratio simultaneously, where a novel feature extraction network is proposed to improve the training effect. Simulation results are finally demonstrated to verify the effectiveness of the proposed optimization framework.
Article
In this paper, we study a multiuser mobile edge computing (MEC) network, where tasks from users can be partially offloaded to multiple computational access points (CAPs). We consider practical cases where task characteristics and computational capability at the CAPs may be time-varying, thus, creating a dynamic offloading problem. To deal with this problem, we first formulate it as a Markov decision process (MDP), and then introduce the state and action spaces. We further design a novel offloading strategy based on the deep Q network (DQN), where the users can dynamically fine-tune the offloading proportion in order to ensure the system performance measured by the latency and energy consumption. Simulation results are finally presented to verify the advantages of the proposed DQN-based offloading strategy over conventional ones.
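A DQN replaces a Q-table with a neural network, but the underlying offloading MDP logic can be sketched in tabular form (the two channel states, the reward numbers, and all hyperparameters below are illustrative assumptions, not the paper's model):

```python
import random

def q_learning_offload(episodes=20000, alpha=0.1, eps=0.2, seed=1):
    """Tabular Q-learning on a toy offloading MDP: state = channel
    quality (0 bad, 1 good), action 0 = local compute, action 1 =
    offload.  A DQN replaces this table with a neural network."""
    rng = random.Random(seed)
    Q = [[0.0, 0.0], [0.0, 0.0]]
    # rewards[(state, action)]: offloading pays off only on a good channel
    rewards = {(0, 0): -3.0, (0, 1): -6.0, (1, 0): -3.0, (1, 1): -1.0}
    s = rng.randrange(2)
    for _ in range(episodes):
        a = rng.randrange(2) if rng.random() < eps else max((0, 1), key=lambda x: Q[s][x])
        r = rewards[(s, a)]
        s2 = rng.randrange(2)  # channel state evolves randomly
        Q[s][a] += alpha * (r + 0.9 * max(Q[s2]) - Q[s][a])
        s = s2
    return Q
```

After training, the greedy policy offloads when the channel is good and computes locally when it is bad.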
Article
Low earth orbit (LEO) satellite networks can break through geographical restrictions and achieve global wireless coverage, which is an indispensable choice for future mobile communication systems. In this paper, we present a hybrid cloud and edge computing LEO satellite (CECLS) network with a three-tier computation architecture, which can provide ground users with heterogeneous computation resources and enable ground users to obtain computation services around the world. With the CECLS architecture, we investigate the computation offloading decisions to minimize the sum energy consumption of ground users, while satisfying the constraints in terms of the coverage time and the computation capability of each LEO satellite. The considered problem is discrete and non-convex, since the objective function and constraints contain binary variables, which makes it difficult to solve. To address this challenging problem, we convert the original non-convex problem into a linear programming problem by using the binary variable relaxation method. Then, we propose a distributed algorithm by leveraging the alternating direction method of multipliers (ADMM) to approximate the optimal solution with low computational complexity. Simulation results show that the proposed algorithm can effectively reduce the total energy consumption of ground users.
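The binary-relaxation step can be illustrated on a stripped-down version of such a problem (a single capacity constraint; the savings/cost numbers are illustrative, and the paper's ADMM machinery is omitted): with one linear constraint, the relaxed LP optimum is greedy in the savings-per-cost ratio, and rounding the fractional variable down restores a feasible binary decision.

```python
def relax_and_round(savings, cpu_cost, capacity):
    """LP relaxation of binary offloading with one capacity constraint:
    maximize sum x_i*savings[i] s.t. sum x_i*cpu_cost[i] <= capacity,
    0 <= x_i <= 1.  The relaxed optimum is greedy in savings/cost
    ratio; rounding the single fractional variable down keeps the
    binary decision feasible."""
    order = sorted(range(len(savings)),
                   key=lambda i: savings[i] / cpu_cost[i], reverse=True)
    x = [0] * len(savings)
    used = 0.0
    for i in order:
        if used + cpu_cost[i] <= capacity:
            x[i] = 1
            used += cpu_cost[i]
    return x
```

For savings [10, 6, 3], costs [5, 4, 3] and capacity 8, the greedy ratio order takes tasks 0 and 2 and skips task 1, which would overflow the budget.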
Article
Artificial intelligence (AI) based downlink channel state information (CSI) prediction for frequency division duplexing (FDD) massive multiple-input multiple-output (MIMO) systems has attracted growing attention recently. However, existing works focus on downlink CSI prediction for users in a given environment and are hard to adapt to users in new environments, especially when labeled data is limited. To address this issue, we formulate the downlink channel prediction as a deep transfer learning (DTL) problem, and propose a direct-transfer algorithm based on the fully-connected neural network architecture, where the network is trained in the manner of classical deep learning and is then fine-tuned for new environments. To further improve the transfer efficiency, we propose a meta-learning algorithm that trains the network by alternating inner-task and across-task updates and then adapts to a new environment with a small number of labeled data. Simulation results show that the direct-transfer algorithm achieves better performance than the deep learning algorithm, which implies that transfer learning benefits the downlink channel prediction in new environments. Moreover, the meta-learning algorithm significantly outperforms the direct-transfer algorithm, which validates its effectiveness and superiority.
Article
Satellite-enabled army Internet of Things (SaIoT) has drawn increasing attention due to its wide coverage and large-capacity transmission. However, smart jamming based on artificial intelligence technologies has seriously degraded SaIoT performance. Thus, this paper investigates a distributed dynamic anti-jamming scheme for SaIoT to decrease energy consumption in the jamming environment. Firstly, a hierarchical anti-jamming Stackelberg game (HASG), which consists of a leader sub-game for jammers and a follower sub-game for SaIoT devices, is proposed to formulate the confrontational interaction between jammers and SaIoT devices. It is proved that a Stackelberg equilibrium exists in the proposed HASG. Then, an anti-jamming coalition formation game (CFG) is proposed for the follower sub-game to decrease energy consumption in the jamming environment, and a modified coalition preference order and coalition change principle are put forward to enhance the performance of the proposed anti-jamming CFG. Furthermore, with the help of the exact potential game, we demonstrate that the proposed anti-jamming CFG converges to a stable coalition formation and achieves performance similar to centralized optimization via a distributed approach. Finally, reinforcement learning based algorithms are utilized to obtain suboptimal anti-jamming policies under the dynamic and unknown jamming environment, and simulation results validate that the proposed approach achieves better performance than existing approaches.
Article
Vehicular ad hoc networks (VANETs) play a promising role in supporting diverse intelligent transportation system (ITS) applications in 5G networks, namely 5G-VANET. There are three types of communication modes in 5G-VANET: cellular mode, reuse mode and dedicated mode, i.e., vehicle users (VUEs) communicate with each other using the cellular network spectrum directly, in an underlay sharing way, or utilizing allocated dedicated spectrum, respectively. However, dynamically sharing the multi-mode spectrum to optimize network performance (i.e., network throughput) in 5G-VANET is a challenging task due to the highly dynamic VANET environment and network resource heterogeneity. In this paper, we propose a dynamic Stackelberg pricing game enabled multi-mode spectrum sharing solution for 5G-VANET. Specifically, we develop an access price strategy for different spectrum sharing modes considering the cellular BS's revenue and the whole network throughput, while VUEs select communication modes in a distributed way and dynamically change their selections through an evolutionary game. By testing different traffic scenarios generated by SUMO, we demonstrate the effectiveness of the proposed algorithm. Specifically, the proposed algorithm can improve the total transmission rate of the VANET by at least 20% compared with the random selection method.
Article
The anti-jamming communication of the heterogeneous Internet of Satellites (IoS) has drawn increasing attention due to smart jamming and high dynamics. This paper investigates a spatial anti-jamming scheme for the IoS, with the aim of minimizing the anti-jamming routing cost via a Stackelberg game and reinforcement learning. Firstly, we formulate the routing anti-jamming problem as a hierarchical anti-jamming Stackelberg game, and prove that a Stackelberg equilibrium (SE) exists in the proposed game. Secondly, the spatial anti-jamming scheme for the IoS consists of two stages: available routing selection and fast anti-jamming decision. To tackle the high dynamics caused by unknown interrupts and unexpected congestion, we propose a deep reinforcement learning based routing algorithm (DRLR) to obtain an available routing subset; furthermore, to make fast anti-jamming decisions, we propose a fast response anti-jamming algorithm (FRA) based on the available routing subset. The user utilizes the DRLR and FRA algorithms to empirically analyze the jammer's strategies and adaptively make anti-jamming decisions according to the dynamic and unknown jamming environment. Finally, simulations show that the proposed algorithm achieves lower routing cost and better anti-jamming performance than existing approaches, and that the anti-jamming policies converge to the SE.
Article
In this paper, we investigate communication and computation problem for industrial Internet of Things (IoT) networks, where there are K relays in the system which can help accomplish the computation task assisted by M computational access points (CAPs). In the industrial IoT networks, latency and energy consumption are two important metrics of interest to measure the system performance. To enhance the system performance, a three-hierarchical optimization framework is proposed to reduce the latency and energy consumption, which involves bandwidth allocation, offloading and relay selection. Specifically, we firstly optimize the bandwidth allocation by presenting three schemes for the second-hop wireless relaying. We then optimize the computation offloading based on discrete particle swarm optimization (DPSO) algorithm. We further present three relay selection criteria by taking into account the trade-off between the system performance and implementation complexity. Simulation results are finally demonstrated to show the effectiveness of the proposed three-hierarchical optimization framework.
Article
An unmanned aerial vehicle (UAV)-assisted mobile edge computing (MEC) system is a prominent concept, in which a UAV equipped with an MEC server is deployed to serve a number of Internet of Things (IoT) terminal devices (TDs) over a finite period. In this paper, each TD has a certain latency-critical computation task to complete in each time slot, and three computation strategies are available to each TD. First, each TD can perform local computing by itself. Second, each TD can partially offload task bits to the UAV for computing. Third, each TD can offload task bits to the access point (AP) via UAV relaying. We formulate an optimization problem that minimizes the total energy consumption, including communication-related energy, computation-related energy and UAV flight energy, by optimizing the bit allocation, time slot scheduling and power allocation as well as the UAV trajectory design. As the formulated problem is nonconvex and the optimal solution is difficult to find, we solve the problem in two parts and obtain a near-optimal solution via the Lagrangian duality method and the successive convex approximation (SCA) technique, respectively. By analysis, the proposed algorithm is guaranteed to converge within a dozen iterations. Finally, numerical results validate that the proposed algorithm is efficient and superior to the other benchmark cases.
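The SCA technique mentioned above can be illustrated on a one-dimensional toy objective (f(x) = x^4 - 3x^2 is an illustrative assumption, not the paper's problem): the concave term -3x^2 is linearized at the current iterate, which yields a convex surrogate with a closed-form minimizer, and the iterates converge to a stationary point of f.

```python
def sca_minimize(x0=1.0, iters=50):
    """Successive convex approximation for f(x) = x^4 - 3x^2.

    Linearizing the concave term -3x^2 at x_k gives the convex
    surrogate x^4 - 3x_k^2 - 6x_k*(x - x_k), whose minimizer has the
    closed form x_{k+1} = (1.5*x_k)**(1/3); the fixed point satisfies
    4x^3 = 6x, i.e., x* = sqrt(1.5)."""
    x = x0
    for _ in range(iters):
        x = (1.5 * x) ** (1.0 / 3.0)
    return x
```

Because the linearization upper-bounds the concave term, each surrogate majorizes f at the current iterate, which is what drives the monotone descent of SCA.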
Article
In a tactical mobile ad-hoc network (MANET), unmanned vehicles are deployed for surveillance and reconnaissance, sending multimedia data to a center node in real time. In this letter, we propose a centralized TDMA slot and power scheduling scheme that maximizes energy efficiency (EE) while considering Quality-of-Service (QoS) for the tactical MANET. We formulate this problem as a non-concave ratio optimization, and propose optimal slot allocation and power control algorithms based on the Dinkelbach method and the concave-convex procedure. The performance of the proposed algorithm is verified by numerical results, which show fairly good energy efficiency.
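The Dinkelbach method referenced above turns a ratio objective max f(x)/g(x) into a sequence of subtractive problems max f(x) - λ g(x), updating λ to the achieved ratio each round. A minimal scalar energy-efficiency sketch (the log-rate numerator, the circuit-power constant pc, and the power cap are illustrative assumptions):

```python
import math

def dinkelbach_ee(pc=0.5, pmax=4.0, tol=1e-9):
    """Dinkelbach iteration for max_x log(1+x)/(x+pc), 0 <= x <= pmax.
    Each inner problem log(1+x) - lam*(x+pc) is concave with the
    closed-form maximizer x = 1/lam - 1 (clipped to the feasible box);
    lam increases monotonically to the optimal ratio."""
    lam = 0.1
    for _ in range(100):
        x = min(max(1.0 / lam - 1.0, 0.0), pmax)
        new_lam = math.log(1.0 + x) / (x + pc)
        if abs(new_lam - lam) < tol:
            break
        lam = new_lam
    return x, lam
```

At convergence the subtractive problem's optimal value is zero, so the returned λ equals the maximal ratio.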
Article
Mobile edge computing (MEC) provides computational services at the edge of networks by offloading tasks from user equipments (UEs). This letter employs an unmanned aerial vehicle (UAV) as the edge computing server to execute offloaded tasks from the ground UEs. We jointly optimize user association, UAV trajectory, and uploading power of each UE to maximize sum bits offloaded from all UEs to the UAV, subject to energy constraint of the UAV and quality of service (QoS) of each UE. To address the non-convex optimization problem, we first decompose it into three subproblems that are solved with integer programming and successive convex optimization methods respectively. Then, we tackle the overall problem by the multi-variable iterative optimization algorithm. Simulations show that the proposed algorithm can achieve a better performance than other baseline schemes.
Article
Internet of things (IoT) computing offloading is a challenging issue, especially in remote areas where common edge/cloud infrastructure is unavailable. In this paper, we present a space-air-ground integrated network (SAGIN) edge/cloud computing architecture for offloading computation-intensive applications under the energy and computation constraints of remote areas, where flying unmanned aerial vehicles (UAVs) provide near-user edge computing and satellites provide access to cloud computing. Firstly, for UAV edge servers, we propose a joint resource allocation and task scheduling approach to efficiently allocate computing resources to virtual machines and schedule the offloaded tasks. Secondly, we investigate the computing offloading problem in SAGIN and propose a learning-based approach to learn the optimal offloading policy from the dynamic SAGIN environment. Specifically, we formulate the offloading decision making as a Markov decision process whose system state considers the network dynamics. To cope with the system dynamics and complexity, we propose a deep reinforcement learning-based computing offloading approach to learn the optimal offloading policy on the fly, where we adopt the policy gradient method to handle the large action space and the actor-critic method to accelerate the learning process. Simulation results show that the proposed edge virtual machine allocation and task scheduling approach can achieve near-optimal performance with very low complexity, and that the proposed learning-based computing offloading algorithm not only converges fast but also achieves a lower total cost compared with other offloading approaches.
Article
Unmanned aerial vehicles (UAVs) have been considered in wireless communication systems to provide high-quality services thanks to their low cost and high maneuverability. This paper addresses a UAV-aided mobile edge computing system, where a number of ground users are served by a moving UAV equipped with computing resources. Each user has computing tasks to complete, which can be separated into two parts: one portion is offloaded to the UAV and the remaining part is executed locally. The UAV moves around above the ground users and provides computing service in an orthogonal multiple access manner over time. For each time period, we aim to minimize the sum of the maximum delay among all the users in each time slot by jointly optimizing the UAV trajectory, the ratio of offloaded tasks, and the user scheduling variables, subject to the discrete binary constraints, the energy consumption constraints, and the UAV trajectory constraints. This problem has a highly nonconvex objective function and constraints. Therefore, we equivalently convert it into a more tractable form by introducing auxiliary variables, and then propose a novel penalty dual decomposition-based algorithm to handle the resulting problem. Furthermore, we develop a simplified ℓ0-norm algorithm with much reduced complexity. Besides, we also extend our algorithm to minimize the average delay. Simulation results illustrate that the proposed algorithms significantly outperform the benchmarks.
Article
The ever-increasing mobile data demands have posed significant challenges in the current radio access networks, while the emerging computation-heavy Internet of things (IoT) applications with varied requirements demand more flexibility and resilience from the cloud/edge computing architecture. In this article, to address the issues, we propose a novel air-ground integrated mobile edge network (AGMEN), where UAVs are flexibly deployed and scheduled, and assist the communication, caching, and computing of the edge network. In specific, we present the detailed architecture of AGMEN, and investigate the benefits and application scenarios of drone-cells, and UAV-assisted edge caching and computing. Furthermore, the challenging issues in AGMEN are discussed, and potential research directions are highlighted.
Article
In conventional terrestrial cellular networks, mobile terminals (MTs) at the cell edge often pose a performance bottleneck due to their long distances from the serving ground base station (GBS), especially in hotspot periods when the GBS is heavily loaded. This paper proposes a new hybrid network architecture that leverages an unmanned aerial vehicle (UAV) as an aerial mobile base station, which flies cyclically along the cell edge to offload data traffic for cell-edge MTs. We aim to maximize the minimum throughput of all MTs by jointly optimizing the UAV's trajectory, bandwidth allocation and user partitioning. We first consider orthogonal spectrum sharing between the UAV and GBS, and then extend to spectrum reuse where the total bandwidth is shared by both the GBS and UAV with their mutual interference effectively avoided. Numerical results show that the proposed hybrid network with optimized spectrum sharing and cyclical multiple access design significantly improves the spatial throughput over the conventional GBS-only network, while the spectrum reuse scheme provides further throughput gains at the cost of slightly higher complexity for interference control. Moreover, compared to the conventional small-cell offloading scheme, the proposed UAV offloading scheme is shown to outperform in terms of throughput, besides saving infrastructure cost.
Article
This letter investigates the transmit power and trajectory optimization problem for unmanned aerial vehicle (UAV)-aided networks. Different from the majority of existing studies with fixed communication infrastructure, a dynamic scenario is considered where a flying UAV provides wireless services for multiple ground nodes simultaneously. To fully exploit the controllable channel variations provided by the UAV's mobility, the UAV's transmit power and trajectory are jointly optimized to maximize the minimum average throughput within a given time length. For the formulated non-convex optimization with power budget and trajectory constraints, this letter presents an efficient joint transmit power and trajectory optimization algorithm. Simulation results validate the effectiveness of the proposed algorithm and reveal that the optimized transmit power exhibits a water-filling characteristic in the spatial domain.
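The water-filling characteristic noted in the abstract can be reproduced with the textbook single-constraint version (the channel gains and power budget below are illustrative; the letter's trajectory coupling is omitted): allocate p_i = max(0, μ - 1/g_i) and bisect on the water level μ until the budget is met.

```python
def water_filling(gains, P, iters=100):
    """Water-filling for max sum_i log(1+p_i*g_i) s.t. sum_i p_i = P:
    p_i = max(0, mu - 1/g_i), with the water level mu found by
    bisection on the total power."""
    lo, hi = 0.0, P + max(1.0 / g for g in gains)
    for _ in range(iters):
        mu = 0.5 * (lo + hi)
        total = sum(max(0.0, mu - 1.0 / g) for g in gains)
        if total > P:
            hi = mu
        else:
            lo = mu
    return [max(0.0, mu - 1.0 / g) for g in gains]
```

All active channels end up at the same "water level" p_i + 1/g_i = μ, so stronger channels receive more power.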
Article
Mobile-edge computing (MEC) is an emerging paradigm to meet the ever-increasing computation demands from mobile applications. By offloading the computationally intensive workloads to the MEC server, the quality of computation experience, e.g., the execution latency, could be greatly improved. Nevertheless, as the on-device battery capacities are limited, computation would be interrupted when the battery energy runs out. To provide satisfactory computation performance as well as achieving green computing, it is of significant importance to seek renewable energy sources to power mobile devices via energy harvesting (EH) technologies. In this paper, we will investigate a green MEC system with EH devices and develop an effective computation offloading strategy. The execution cost, which addresses both the execution latency and task failure, is adopted as the performance metric. A low-complexity online algorithm, namely, the Lyapunov optimization-based dynamic computation offloading (LODCO) algorithm is proposed, which jointly decides the offloading decision, the CPU-cycle frequencies for mobile execution, and the transmit power for computation offloading. A unique advantage of this algorithm is that the decisions depend only on the instantaneous side information without requiring distribution information of the computation task request, the wireless channel, and EH processes. The implementation of the algorithm only requires to solve a deterministic problem in each time slot, for which the optimal solution can be obtained either in closed form or by bisection search. Moreover, the proposed algorithm is shown to be asymptotically optimal via rigorous analysis. Sample simulation results shall be presented to verify the theoretical analysis as well as validate the effectiveness of the proposed algorithm.
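LODCO instantiates the general Lyapunov drift-plus-penalty template: in each slot a virtual queue, which grows when a long-term budget is exceeded, is traded against V times the immediate penalty, and the decision depends only on instantaneous state. A generic sketch of that template (the two actions, their cost/power numbers, and the budget are illustrative assumptions, not LODCO itself):

```python
def drift_plus_penalty(costs, powers, p_avg, V=10.0, slots=2000):
    """Generic drift-plus-penalty loop: each slot greedily pick the
    action minimizing V*cost + Q*power, where the virtual queue Q
    tracks violation of the average-power budget p_avg."""
    Q, total_cost, total_power = 0.0, 0.0, 0.0
    for _ in range(slots):
        a = min(range(len(costs)), key=lambda i: V * costs[i] + Q * powers[i])
        total_cost += costs[a]
        total_power += powers[a]
        Q = max(0.0, Q + powers[a] - p_avg)  # queue grows on overshoot
    return total_cost / slots, total_power / slots
```

Larger V weighs the penalty more heavily at the expense of a larger queue backlog; in this toy run the time-average power converges to the 1.0 budget while the per-slot rule never looks ahead.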
Article
This paper investigates the performance of a cognitive hybrid satellite-terrestrial network, in which the primary satellite communication network and the secondary terrestrial mobile network coexist provided that the interference temperature constraint is satisfied. Using Meijer G-functions, an exact closed-form expression for the outage probability (OP) of the secondary network is first derived. Then, the asymptotic result in the high signal-to-noise ratio (SNR) regime is presented to reveal the diversity order and coding gain of the considered system. Finally, computer simulations confirm the theoretical results and reveal that a looser interference constraint or heavier shadowing of the satellite interference link reduces the outage probability, while a stronger satellite interference power degrades the outage performance.
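The exact Meijer G-function analysis is beyond a short example, but the qualitative outage behavior can be sanity-checked by Monte Carlo simulation. The sketch below assumes a simple log-normally shadowed SNR as a stand-in for the paper's satellite fading model; the threshold, mean SNR, and shadowing spread are illustrative.

```python
import random

def outage_probability_mc(snr_threshold_db, mean_snr_db, sigma_db=6.0,
                          trials=200_000, seed=1):
    """Monte Carlo outage-probability estimate for a log-normally
    shadowed link: an outage occurs whenever the instantaneous SNR
    (in dB, Gaussian around its mean) falls below the threshold."""
    rng = random.Random(seed)
    outages = 0
    for _ in range(trials):
        snr_db = mean_snr_db + rng.gauss(0.0, sigma_db)
        if snr_db < snr_threshold_db:
            outages += 1
    return outages / trials
```

As expected, raising the mean SNR (e.g., via a looser interference constraint on the secondary transmitter) lowers the estimated outage probability.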
Article
This unified treatment of game theory focuses on finding state-of-the-art solutions to issues surrounding the next generation of wireless and communications networks. Future networks will rely on autonomous and distributed architectures to improve the efficiency and flexibility of mobile applications, and game theory provides the ideal framework for designing efficient and robust distributed algorithms. This book enables readers to develop a solid understanding of game theory, its applications and its use as an effective tool for addressing wireless communication and networking problems. The key results and tools of game theory are covered, as are various real-world technologies including 3G networks, wireless LANs, sensor networks, dynamic spectrum access and cognitive networks. The book also covers a wide range of techniques for modeling, designing and analysing communication networks using game theory, as well as state-of-the-art distributed design techniques. This is an ideal resource for communications engineers, researchers, and graduate and undergraduate students.
Article
This article investigates the problem of distributed channel selection in opportunistic spectrum access (OSA) networks with partially overlapping channels (POC) using a game-theoretic learning algorithm. Compared with traditional non-overlapping channels (NOC), POC can increase full-range spectrum utilization, mitigate interference, and improve network throughput. However, most existing POC approaches are centralized and hence unsuitable for distributed OSA networks. We formulate the POC selection problem as an interference mitigation game and prove that the game has at least one pure-strategy Nash equilibrium (NE) point, with the best pure-strategy NE point minimizing the aggregate interference in the network. We characterize the achievable performance of the game by presenting an upper bound on the aggregate interference of all NE points. In addition, we propose a simultaneous uncoupled learning algorithm with heterogeneous exploration rates to achieve the pure-strategy NE points of the game. Simulation results show that the heterogeneous exploration rates lead to faster convergence and that the throughput gain of the proposed POC approach over the traditional NOC approach is significant. The proposed uncoupled learning algorithm also achieves satisfactory performance compared with existing coupled and uncoupled algorithms.
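The interference mitigation game can be illustrated with a small sequential best-response sketch (the article's actual algorithm is a simultaneous uncoupled learning rule; best response is used here only because its convergence to a pure-strategy NE in a potential game is easy to see). The overlap weights, user count, and channel count below are illustrative; `weights[d]` is an assumed interference weight between channels separated by `d`.

```python
import random

def best_response_channel_selection(weights, n_users, n_channels,
                                    rounds=50, seed=0):
    """Sequential best-response dynamics for distributed channel
    selection: each user in turn moves to the channel minimizing its
    experienced interference, stopping once no user can improve
    (a pure-strategy NE)."""
    rng = random.Random(seed)
    choice = [rng.randrange(n_channels) for _ in range(n_users)]

    def interference(u, ch):
        total = 0.0
        for v in range(n_users):
            if v == u:
                continue
            d = abs(ch - choice[v])
            if d < len(weights):       # channels farther apart don't overlap
                total += weights[d]
        return total

    for _ in range(rounds):
        changed = False
        for u in range(n_users):
            best = min(range(n_channels), key=lambda c: interference(u, c))
            if interference(u, best) < interference(u, choice[u]):
                choice[u] = best
                changed = True
        if not changed:                # no unilateral improvement: NE
            break
    return choice
```

With co-channel interference only (`weights=[1.0]`) and as many channels as users, the dynamics settle on an orthogonal, zero-interference assignment.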
Article
We investigate the problem of achieving global optimization for distributed channel selection in cognitive radio networks (CRNs) using game-theoretic solutions. To cope with the lack of centralized control and with local influences, we propose two special cases of local interaction games to study this problem. The first is the local altruistic game, in which each user considers the payoffs of itself and its neighbors rather than its own payoff only. The second is the local congestion game, in which each user minimizes the number of competing neighbors. It is shown that with the proposed games, global optimization is achieved using only local information. Specifically, the local altruistic game maximizes the network throughput and the local congestion game minimizes the network collision level. Furthermore, concurrent spatial adaptive play (C-SAP), an extension of the existing spatial adaptive play (SAP), is proposed to achieve the global optimum both autonomously and rapidly.
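Spatial adaptive play, which C-SAP extends, is essentially log-linear learning: at each step one randomly chosen player resamples its action from a Boltzmann distribution over its utilities, and as the temperature parameter beta grows the stationary distribution concentrates on the potential-maximizing (globally optimal) profile. A minimal sketch, assuming a `utility(player, action, profile)` callback that is not from the paper:

```python
import math
import random

def spatial_adaptive_play(utility, n_players, n_actions, beta=10.0,
                          steps=2000, seed=0):
    """Log-linear learning (spatial adaptive play): per step, one random
    player redraws its action with probability proportional to
    exp(beta * utility), holding all other players fixed."""
    rng = random.Random(seed)
    profile = [rng.randrange(n_actions) for _ in range(n_players)]
    for _ in range(steps):
        p = rng.randrange(n_players)
        scores = [math.exp(beta * utility(p, a, profile))
                  for a in range(n_actions)]
        profile[p] = rng.choices(range(n_actions), weights=scores)[0]
    return profile
```

With a congestion-style utility (the negative count of other players on the same channel) and a large beta, the play settles on a collision-free profile, mirroring the local congestion game's collision-minimizing outcome.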
Article
In this paper, we describe the results of a channel measurement and modeling campaign for the vehicle-to-vehicle (V2V) channel in the 5-GHz band. We describe measurements and results for delay spread, amplitude statistics, and correlations for multiple V2V environments. We also discuss considerations used in developing statistical channel models for these environments and provide some sample results. Several statistical channel models are presented, and using simulation results, we elucidate tradeoffs between model implementation complexity and fidelity. The channel models presented should be useful for system designers in future V2V communication systems.
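A tapped-delay-line model is one common way to realize statistical channel models of the kind the measurement campaign informs. The sketch below assumes a Rician first tap, Rayleigh later taps, and an exponentially decaying power-delay profile; the tap count, K-factor, and decay rate are illustrative assumptions, not values from the paper.

```python
import math
import random

def v2v_channel_taps(n_taps=4, k_factor_db=3.0, decay_db_per_tap=3.0, seed=0):
    """Draw one realization of a tapped-delay-line channel: tap 0 is
    Rician (a fixed LOS-like component plus a diffuse part), later taps
    are Rayleigh (zero-mean complex Gaussian), with tap powers decaying
    exponentially in delay. Returns complex tap gains."""
    rng = random.Random(seed)
    k = 10 ** (k_factor_db / 10.0)
    taps = []
    for i in range(n_taps):
        power = 10 ** (-decay_db_per_tap * i / 10.0)
        if i == 0:  # Rician: E|h|^2 = los^2 + 2*sigma^2 = power
            los = math.sqrt(k / (k + 1) * power)
            sigma = math.sqrt(power / (2 * (k + 1)))
            re, im = los + rng.gauss(0.0, sigma), rng.gauss(0.0, sigma)
        else:       # Rayleigh: E|h|^2 = 2*sigma^2 = power
            sigma = math.sqrt(power / 2.0)
            re, im = rng.gauss(0.0, sigma), rng.gauss(0.0, sigma)
        taps.append(complex(re, im))
    return taps
```

Averaged over many realizations, the per-tap powers recover the assumed profile (1, about 0.5, about 0.25, ... for 3 dB of decay per tap), which is the kind of delay-spread statistic such measurement campaigns report.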
Space-air-ground integrated networks: Review and prospect
  • X Shen