Conference Paper

Energy-Efficient D2D-Aided Fog Computing under Probabilistic Time Constraints

... Under strict delay requirements, the system's energy consumption is minimized by jointly optimizing resource allocation and sub-channel assignment. In [20], the authors studied a device-to-device (D2D)-assisted fog computing scenario, minimizing the system's energy consumption under a probabilistic constraint on task processing time. The studies above all reduce system energy consumption through different optimization methods in various application scenarios. ...
Article
Full-text available
Federated learning (FL) is an emerging foundational technology for artificial intelligence (AI). It is essentially a distributed machine learning (ML) paradigm that allows clients to train models locally and upload only the trained model parameters to the server, while the original data remain local, which protects client privacy and significantly reduces communication pressure. This paper combines non-orthogonal multiple access (NOMA), for optimizing bandwidth allocation, with FL to study a novel energy-efficient FL system that can effectively reduce energy consumption while ensuring user privacy. The considered model uses clustering for transmission between clients and the base station (BS): NOMA is used inside each cluster to transmit information to the BS, and frequency division multiple access (FDMA) is used between clusters to eliminate the inter-cluster interference caused by the clustering. We combine communication and computing design to minimize the system's total energy consumption. Since the optimization problem is non-convex, it is first transformed into a Lagrangian function, and the original problem is divided into three sub-problems. The Karush–Kuhn–Tucker (KKT) conditions and the Successive Convex Approximation (SCA) method are then used to solve each sub-problem. Simulation analysis shows that our proposed energy-efficient FL design significantly improves performance compared with other benchmarks.
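The SCA step mentioned above can be illustrated on a toy problem. In the sketch below, everything — the objective x^4 - 2x^2, the starting point, and the closed-form surrogate minimizer — is an illustrative assumption, not the paper's system model; the point is only how SCA linearizes the concave part of the objective at the current iterate and minimizes the resulting convex surrogate:

```python
# Successive Convex Approximation (SCA) on a toy nonconvex problem:
#   minimize f(x) = x^4 - 2x^2   (convex x^4 plus concave -2x^2).
# Each step linearizes the concave part at the current iterate x_k,
# giving the convex surrogate  x^4 - 2*x_k^2 - 4*x_k*(x - x_k),
# whose minimizer solves 4x^3 - 4*x_k = 0, i.e. x = x_k**(1/3).

def sca_minimize(x0: float, iters: int = 30) -> float:
    x = x0  # requires x0 > 0 so the real cube root is well defined
    for _ in range(iters):
        x = x ** (1.0 / 3.0)  # closed-form minimizer of the surrogate
    return x
```

Starting from any x0 > 0 the iterates converge to the stationary point x = 1 of f, illustrating how SCA trades one nonconvex problem for a sequence of easy convex ones.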
... The total computing resources allocated to all tasks must not exceed the amount of resources presently available, as specified by Constraint 5 [23]. We denote EA_d(t) as the available energy resource of IE d in time slot t, and Constraint 6 indicates that the total energy resource allocated to all tasks must not exceed the amount of resources currently available to IE d [34]. The energy resources of MEC servers are assumed to be unlimited, so no energy constraints are imposed on them. ...
Article
Full-text available
The rapid growth of the Internet of Things (IoT) has resulted in the development of intelligent industrial systems known as the Industrial IoT (IIoT). These systems integrate smart devices, sensors, cameras, and 5G technologies to enable automated data gathering and analysis, boosting production efficiency and overcoming scalability issues. However, IoT devices have limited computing power, memory, and battery capacity. To address these challenges, mobile edge computing (MEC) has been introduced into IIoT systems to reduce the computational burden on the devices. While the dedicated MEC paradigm limits optimal resource utilization and load balancing, the MEC federation can potentially overcome these drawbacks. However, previous studies have relied on idealized assumptions when developing optimal models, raising concerns about their practical applicability. In this study, we investigated the joint offloading decision and resource allocation problem for MEC federation in the IIoT. Specifically, an optimization model was constructed based on the real-world factors influencing system performance. To minimize the total energy-delay cost, the original problem was transformed into a Markov decision process. Considering task generation dynamics and continuity, we addressed the Markov decision process using a deep reinforcement learning method. We propose a deep deterministic policy gradient algorithm with prioritized experience replay (DDPG-PER)-based resource allocation that can handle high-dimensional continuous action and state spaces. The simulation results indicate that the proposed approach effectively minimizes the energy-delay costs associated with tasks.
Article
Fog computing can deliver low delay and advanced IT services to end users with substantially reduced energy consumption. Nevertheless, with soaring demand for resource services and the limited capability of fog nodes, how to allocate and manage fog computing resources properly and stably has become a bottleneck. This paper therefore investigates the utility optimization-based resource allocation problem between fog nodes and end users in fog computing. The authors first introduce four types of utility functions, reflecting the diverse tasks executed by end users, and build a resource allocation model aiming at utility maximization. Then, for elastic tasks only, the convex optimization method is applied to obtain the optimal results; for mixed elastic and inelastic tasks, with the assistance of Jensen's inequality, the primal non-convex model is approximated by a sequence of equivalent convex optimization problems using the successive approximation method. Moreover, a two-layer algorithm is proposed that globally converges to an optimal solution of the original problem. Finally, numerical simulation results demonstrate its superior performance and effectiveness. Compared with other works, the authors emphasize the analysis of non-convex optimization problems and the diversity of tasks in fog computing resource allocation.
Article
Full-text available
To satisfy the increasing demand of mobile data traffic and meet the stringent requirements of the emerging Internet of Things (IoT) applications such as smart city, healthcare, augmented/virtual reality (AR/VR), the fifth generation (5G) enabling technologies are proposed and utilized in networks. As an emerging key technology of 5G and a key enabler of IoT, Multi-access edge computing (MEC), which integrates telecommunication and IT services, offers cloud computing capabilities at the edge of the radio access network (RAN). By providing computational and storage resources at the edge, MEC can reduce latency for end users. Hence, this paper investigates MEC for 5G and IoT comprehensively. It analyzes the main features of MEC in the context of 5G and IoT, and presents several fundamental key technologies which enable MEC to be applied in 5G and IoT, such as cloud computing, SDN/NFV, information centric networks, virtual machine (VM) and containers, smart devices, network slicing, and computation offloading. In addition, this paper provides an overview of the role of MEC in 5G and IoT, bringing light into the different MEC enabled 5G and IoT applications as well as the promising future directions of integrating MEC with 5G and IoT. Moreover, this paper further elaborates research challenges and open issues of MEC for 5G and IoT. Last but not least, we propose a use case that utilizes MEC to achieve edge intelligence in IoT scenarios.
Article
Full-text available
The Internet of Things (IoT) aims to connect the real world, made up of devices, sensors and actuators, to the virtual world of the Internet in order to interconnect devices with each other and generate information from the gathered data. Devices, in general, have limited computational power and limited storage capacity. Cloud Computing (CC) has virtually unlimited capacity in terms of storage and computing power, and is based on sharing resources. Therefore, the integration of IoT and CC seems to be one of the most promising solutions; in fact, many of the biggest companies that offer cloud services are focusing on the IoT world in order to offer services in this direction to their users. In this paper we compare the three main cloud platforms (Amazon Web Services, Google Cloud Platform and Microsoft Azure) with regard to the services they make available for the IoT. After describing the typical architecture of an IoT application, we map the Cloud-IoT platform services onto this architecture, analyzing the key points of each platform. At the same time, in order to conduct a comparative analysis of performance, we focus on a service made available by all platforms (MQTT middleware), building the reference scenarios and the metrics to be taken into account. Finally, we provide an overview of platform costs under different loads. The aim is not to declare a winner, but to provide developers with a useful tool for making an informed choice of platform depending on the use case.
Article
Full-text available
In this paper, we investigate resource allocation in a D2D-aided fog computing system with multiple mobile user equipments (MUEs). Each MUE requests a task from a task library and must decide how to perform it by selecting one of three processing modes: local mode, fog offloading mode, and cloud offloading mode. Two scenarios are considered: task caching and its optimization during off-peak time, and task offloading and its optimization in immediate time. In particular, task caching refers to caching a completed task application and its related data. In the first scenario, to maximize the average utility of the MUEs, a task caching optimization problem is formulated with stochastic theory and solved by a GA-based task caching algorithm. In the second scenario, to maximize the total utility of the system, the task offloading and resource optimization problem is formulated as a mixed-integer nonlinear programming (MINLP) problem that jointly considers the MUE allocation policy, the task offloading policy, and the computational resource allocation policy. Due to the nonconvexity of the problem, we transform it into a multi-MUE association problem (MMAP) and a mixed fog/cloud task offloading optimization problem (MFCOOP). The former is solved by a Gini coefficient-based MUE allocation algorithm that selects the MUEs who contribute most to the total utility. The task offloading optimization problem is proved to be a potential game and solved by a distributed algorithm with Lagrange multipliers. Finally, simulations show the effectiveness of the proposed scheme in comparison with other baseline schemes.
Article
Full-text available
With the proliferation of computation-extensive and latency-critical applications in the 5G and beyond networks, mobile-edge computing (MEC) or fog computing, which provides cloud-clone computation and/or storage capabilities at the network edge, is envisioned to reduce computation latency as well as conserve energy for wireless devices (WDs). This paper studies a novel device-to-device (D2D)-enabled multi-helper MEC system, in which a local user offloads its computation tasks to multiple helpers for cooperative computation. We assume a time division multiple access (TDMA) transmission protocol, under which the local user offloads the tasks to multiple helpers and downloads the results from them over orthogonal pre-scheduled time slots. Under this setup, we minimize the computation latency by optimizing the local user's task assignment jointly with the time and rate for task offloading and results downloading, as well as the computation frequency for task execution, subject to individual energy and computation capacity constraints at the local user and the helpers. However, the formulated problem is a mixed-integer non-linear program (MINLP) that is difficult to solve. To tackle this challenge, we propose an efficient algorithm by first relaxing the original problem into a convex one, and then constructing suboptimal task assignment based on the obtained optimal solution. Next, we consider a benchmark scheme that endows the WDs with their maximum computation capacities. To further reduce the implementation complexity, we also develop a heuristic scheme based on the greedy task assignment. Finally, numerical results validate the effectiveness of our proposed algorithm, as compared against the heuristic scheme and other benchmark ones without joint optimization of radio and computation resources or without task assignment design.
Article
Full-text available
Computation task offloading and resource management in mobile edge computing (MEC) have attracted much attention in recent years, and many algorithms have been proposed to improve the performance of MEC systems. However, research on power control in MEC systems is just starting. Power control has been investigated in single-user and interference-free multi-user MEC systems; in interference-aware multi-user MEC systems, however, the issue has not been studied in detail. Therefore, a game theory based power control approach for the interference-aware multi-user MEC system is proposed in this paper. The algorithm takes both the interference and the multi-user scenario into account. Moreover, the existence and uniqueness of the Nash equilibrium (NE) of this game are proved, and the performance of the algorithm is evaluated via theoretical analysis and numerical simulation. The convergence, the computational complexity, and the price of anarchy in terms of the system-wide computation overhead are investigated in detail. The performance of the algorithm is compared with a traditional localized optimal algorithm by simulation, and the results demonstrate that the proposed algorithm has clear advantages over the traditional one.
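The NE computation in such a power-control game can be sketched with best-response dynamics. In the toy model below, every ingredient — the utility log(1 + p_i/I_i) - c*p_i, the cross-channel gain g, the noise level sigma, the unit price c, and the power cap p_max — is an illustrative assumption rather than the paper's formulation:

```python
# Best-response dynamics for a toy interference-aware power-control game.
# User i sees interference I_i = sigma + g * sum_{j != i} p_j and picks
# the power maximizing log(1 + p_i / I_i) - c * p_i, whose closed-form
# best response is p_i = clip(1/c - I_i, 0, p_max).

def best_response_dynamics(n=3, sigma=0.1, g=0.2, c=1.0, p_max=2.0,
                           sweeps=200):
    p = [0.0] * n
    for _ in range(sweeps):          # Gauss-Seidel best-response sweeps
        for i in range(n):
            interference = sigma + g * sum(p[j] for j in range(n) if j != i)
            p[i] = min(max(1.0 / c - interference, 0.0), p_max)
    return p
```

Because the best-response map is a contraction here (g*(n-1) < 1), the sweeps converge to the unique symmetric NE p* = (1/c - sigma) / (1 + g*(n-1)), mirroring the existence-and-uniqueness argument in the abstract.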
Article
Full-text available
In this paper, we propose D2D Fogging, a novel mobile task offloading framework based on network-assisted device-to-device (D2D) collaboration, in which mobile users can dynamically and beneficially share computation and communication resources with each other under the control assistance of the network operator. The purpose of D2D Fogging is to achieve energy-efficient task execution for network-wide users. To this end, we formulate an optimization problem that minimizes the time-average energy consumption of task execution across all users, while taking into account incentive constraints that prevent over-exploiting and free-riding behaviors, which harm users' motivation to collaborate. To overcome the challenge that future system information, such as user resource availability, is difficult to predict, we develop an online task offloading algorithm which leverages Lyapunov optimization methods and uses only the current system information. As its critical building block, we devise efficient task scheduling policies for three kinds of system settings within a time frame. Extensive simulation results demonstrate that the proposed online algorithm not only achieves superior performance (e.g., it reduces energy consumption by approximately 30%∼40% compared with local execution), but also adapts to a variety of situations in terms of task type, user count, and task frequency.
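The drift-plus-penalty decision rule at the heart of such Lyapunov-based online algorithms can be sketched in a few lines. The action set, the energy/service numbers, the arrival process, and the weight V below are all illustrative assumptions, not those of D2D Fogging:

```python
import random

# Drift-plus-penalty sketch: each slot, pick the action minimizing
#   V * energy(a) - Q(t) * service(a)
# (energy is the penalty term, the backlog queue Q supplies the drift),
# then update the queue with the slot's random task arrivals.

ACTIONS = {                 # action: (energy cost, bits served per slot)
    "idle":    (0.0, 0.0),
    "local":   (1.0, 2.0),
    "offload": (0.4, 1.0),
}

def run(V=5.0, mean_arrival=1.0, slots=10_000, seed=0):
    rng = random.Random(seed)
    q = 0.0                 # task backlog (virtual queue)
    energy_sum = 0.0
    for _ in range(slots):
        act = min(ACTIONS, key=lambda a: V * ACTIONS[a][0] - q * ACTIONS[a][1])
        energy, served = ACTIONS[act]
        energy_sum += energy
        arrival = rng.uniform(0.0, 2.0 * mean_arrival)
        q = max(q + arrival - served, 0.0)
    return energy_sum / slots, q
```

Only current-slot information (the backlog q and the realized arrival) is used, yet the queue stays bounded while the cheaper offload action is preferred whenever the backlog permits; a larger V weights energy savings more heavily at the cost of a longer backlog, which is exactly the trade-off the online algorithm exploits.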
Article
Full-text available
We consider the problem of decomposing a multivariate polynomial as the difference of two convex polynomials. We introduce algebraic techniques which reduce this task to linear, second order cone, and semidefinite programming. This allows us to optimize over subsets of valid difference of convex decompositions (dcds) and find ones that speed up the convex–concave procedure. We prove, however, that optimizing over the entire set of dcds is NP-hard.
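A minimal example of such a decomposition (a standard textbook-style instance, chosen here for illustration and not taken from the article): the nonconvex quartic below splits into two convex polynomials, and the convex–concave procedure then linearizes the subtracted part at each iterate.

```latex
f(x) = x^4 - 3x^2 + x
     = \underbrace{\bigl(x^4 + x\bigr)}_{g(x),\;\; g''(x) = 12x^2 \ge 0}
     \;-\; \underbrace{3x^2}_{h(x),\;\; h''(x) = 6 > 0}
% Convex-concave procedure: linearize h at x_k, solve the convex problem
x_{k+1} = \operatorname*{arg\,min}_{x} \; g(x) - h(x_k) - h'(x_k)\,(x - x_k)
```

Different choices of (g, h) yield different linearizations, which is why optimizing over the set of valid dcds can accelerate the procedure.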
Article
Fog computing is envisioned as a promising approach for supporting emerging computation-intensive applications on capacity- and battery-constrained mobile Internet of Things (IoT) devices. Technically speaking, a massive crowd of devices in close proximity can be harvested and can collaborate for computation and communication resource sharing; fog computing thus holds significant potential for low-latency and energy-efficient mobile task execution. However, without an efficient incentive mechanism to stimulate resource sharing among devices, the benefits of fog computing cannot be fully realized. Leveraging coalitional game theory, this work presents an efficient incentive mechanism to incentivize mutually beneficial resource cooperation among devices for collaborative task execution. In particular, to efficiently achieve mutually beneficial task execution, the proposed mechanism groups the devices into multiple micro computing clusters (MCCs). Within each MCC, devices can exchange mutually beneficial actions by helping to compute or transmit tasks, making all of their performances no worse than local execution or execution in the fog server. The MCC formation is devised through both centralized and decentralized schemes and is further proven to admit desirable properties such as the top-coalition property, a core solution, individual rationality, and computational efficiency. Extensive numerical studies demonstrate the superior performance of our MCC formation mechanisms.
Article
The paper addresses the nonconvex nonsmooth optimization problem with the cost function and equality and inequality constraints given by d.c. functions. The original problem is reduced to a problem without constraints with the help of the exact penalization theory. After that, the penalized problem is represented as a d.c. minimization problem without constraints, for which the new mathematical tools under the form of global optimality conditions (GOCs) are developed. The GOCs reduce the nonconvex problem in question to a family of convex (linearized with respect to the basic nonconvexities) problems. In addition, the GOCs are related to some nonsmooth form of the KKT-theorem for the original problem. On the base of the developed theory we propose new numerical methods for local and global search.
Article
Fog computing is identified as a key enabler for running various emerging applications on battery-powered and computationally constrained devices. In this paper, we consider devices that aim at improving their performance by choosing to offload their computational tasks to nearby devices or to an edge cloud. We develop a game theoretical model of the problem and use variational inequality theory to compute an equilibrium task allocation in static mixed strategies. Based on the computed equilibrium strategy, we develop a decentralized algorithm for allocating the computational tasks among nearby devices and the edge cloud. We use extensive simulations to provide insight into the performance of the proposed algorithm and compare it with a myopic best response algorithm that requires global knowledge of the system state. Although the proposed algorithm relies on average system parameters only, our results show that it provides system performance close to that of the myopic best response algorithm.
Article
The rapid uptake of Internet-of-Things (IoT) devices imposes an unprecedented pressure for data communication and processing on the backbone network and the central cloud infrastructure. To overcome this issue, the recently advocated mobile-edge computing (MEC)-enabled IoT is promising. Meanwhile, driven by the growing social awareness of privacy, significant research efforts have been devoted to relevant issues in IoT; however, most of them mainly focus on the conventional cloud-based IoT. In this work, a new privacy vulnerability caused by the wireless offloading feature of MEC-enabled IoT is identified. To address this vulnerability, an effective privacy-aware offloading scheme is developed based on a newly proposed deep post-decision state (PDS)-learning algorithm. By exploiting extra prior information, the proposed deep PDS-learning algorithm allows the IoT devices to learn a good privacy-aware offloading strategy much faster than the conventional deep Q-network. Theoretic analysis and numerical results are provided to corroborate the correctness and the effectiveness of the proposed algorithm.
Article
The emergence of the Industrial Internet of Things (IIoT) has paved the way to real-time big data storage, access, and processing in the cloud environment. In IIoT, the big data generated by various devices such as smartphones, wireless body sensors, and smart meters will be on the order of zettabytes in the near future. Hence, relaying this huge amount of data to the remote cloud platform for further processing can lead to severe network congestion. This in turn results in latency issues which affect the overall QoS for various applications in IIoT. To cope with these challenges, a recent paradigm shift in computing, popularly known as edge computing, has emerged. Edge computing can be viewed as a complement to cloud computing rather than as a competitor. The cooperation and interplay among cloud and edge devices can help to reduce energy consumption in addition to maintaining the QoS for various applications in the IIoT environment. However, a large number of migrations among edge devices and cloud servers leads to congestion in the underlying networks. Hence, to handle this problem, SDN, a recent programmable and scalable network paradigm, has emerged as a viable solution. Keeping focus on all the aforementioned issues, in this article, an SDN-based edge-cloud interplay is presented to handle streaming big data in the IIoT environment, wherein SDN provides efficient middleware support. In the proposed solution, a multi-objective evolutionary algorithm using Tchebycheff decomposition for flow scheduling and routing in SDN is presented. The proposed scheme is evaluated with respect to two optimization objectives: the trade-off between energy efficiency and latency, and the trade-off between energy efficiency and bandwidth. The results obtained prove the effectiveness of the proposed flow scheduling scheme in the IIoT environment.
Article
As mobile system-on-chips incorporate multicore processors with high power densities, high chip temperatures are becoming a rising concern in mobile processors. Modern smartphones are limited in their cooling capabilities and employ CPU throttling mechanisms to avoid thermal emergencies by sacrificing performance. Traditional throttling techniques aim at achieving maximum utilization of the available thermal headroom so as to minimize the performance penalty at a given time. This letter demonstrates that such greedy techniques lead to fast elevation of temperature on other system components and cause substantially suboptimal performance over increased durations of phone activity. Through experiments on a commercial smartphone, we characterize the impact of application duration on throttling-induced performance loss and propose quality-of-service (QoS) tuning as an effective way of providing the mobile system user with consistent performance levels over extended application durations. The proposed QoS-aware frequency capping technique achieves up to 56% improvement in performance sustainability.
Article
Recent mobile devices adopt high-performance processors to support various functions. As a side effect, higher performance inevitably leads to power density increase, eventually resulting in thermal problems. In order to alleviate the thermal problems, off-the-shelf mobile devices rely on dynamic voltage-frequency scaling (DVFS)-based dynamic thermal management (DTM) schemes. Unfortunately, in the DVFS-based DTM schemes, an excessive number of DTM operations worsen not only performance but also power efficiency. In this paper, we propose a temperature-aware DVFS scheme for Android-based mobile devices to optimize power or performance depending on the option. We evaluate our scheme in the off-the-shelf mobile device. Our evaluation results show that our scheme saves energy consumption by 12.7%, on average, when we use the power optimizing option. Our scheme also enhances the performance by 6.3%, on average, by using the performance optimizing scheme, still reducing the energy consumption by 6.7%.
Article
A new conceptual and analytical vehicle for problems of temporal planning under uncertainty, involving determination of optimal (sequential) stochastic decision rules is defined and illustrated by means of a typical industrial example. The paper presents a method of attack which splits the problem into two non-linear (or linear) programming parts, (i) determining optimal probability distributions, (ii) approximating the optimal distributions as closely as possible by decision rules of prescribed form.
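This chance-constrained programming framework is what underlies the "probabilistic time constraints" in the citing paper's title. A standard deterministic reformulation, shown here for an assumed Gaussian disturbance (illustrative, not the industrial example of the article): a probabilistic bound on a completion time a^T x + xi converts into a single convex constraint via the standard normal quantile function.

```latex
\Pr\bigl\{\, a^{\top}x + \xi \le \tau \,\bigr\} \ge 1 - \varepsilon,
\qquad \xi \sim \mathcal{N}(0, \sigma^2)
\quad\Longleftrightarrow\quad
a^{\top}x + \sigma\,\Phi^{-1}(1 - \varepsilon) \le \tau
% For eps <= 1/2, Phi^{-1}(1 - eps) >= 0, so the constraint is linear in x.
```

The reformulated constraint can then be handled by ordinary convex (here, linear) programming, which is the "split" into deterministic programming parts described in the abstract.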
Article
We consider the problem of sum rate maximization with joint resource allocation and interference mitigation by multiantenna processing in wireless networks. The denominators in the users' signal-to-interference-plus-noise expressions are assumed to be representable in the form of matrix-based, concave interference functions. It is shown that the problem of interest for this system model can be readily rewritten as a minimization of a difference of convex functions. Based on this representation, an iterative algorithm with guaranteed convergence is employed to calculate possibly suboptimal solutions of the main problem, which is known to be NP-hard. The proposed technique enables achieving a large portion of the globally optimal sum rate. It is also very efficient and rather general in terms of allowing interesting extensions, compared with the related results from the literature.