Article

A Distributed Framework for Task Offloading in Edge Computing Networks of Arbitrary Topology


Abstract

An important issue in an edge computing (EC) network is to increase the utilities of the end users that concurrently access the computation resources. In this paper, we consider task offloading in EC-enabled networks where the end users efficiently utilize the dispersed computation and communication resources in a multi-path, multi-hop manner. We propose a binary optimization framework that generalizes multi-hop wireless EC task offloading as the joint problem of server selection and traffic routing in networks of arbitrary topology (JoSRAT). We further develop an approximation algorithm for JoSRAT that admits a fully distributed implementation with worst-case performance guarantees. Interestingly, the proposed distributed algorithm achieves nearly optimal performance in numerical evaluations, significantly outperforming its worst-case guarantees. It also outperforms a widely used heuristic, First Fit, in computational time complexity, indicating the superior capability of the proposed framework.


... subject to φ ≥ 0 and that (5) and (7) hold. (8) Note that problem (8) is not convex in φ, and we do not explicitly impose any link or computation capacity constraints in (8), since they are already incorporated in the cost functions. ...
... In this section, we first establish a set of KKT necessary conditions for (8), and demonstrate by example that such necessary conditions could lead to sub-optimal solutions. Then, we propose a set of sufficient optimality conditions. ...
... Theorem 1: Let φ be feasible for (8); if for all i ∈ V, (d, m) ∈ S, and j ∈ {0} ∪ O(i), ...
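
The excerpted theorem has the flavor of Gallager-type marginal-cost conditions for flow-fraction routing variables. As a hedged illustration only, assuming a total cost D and flow fractions φ as in the excerpts (the exact condition in the cited paper may differ):

```latex
% Schematic Gallager-type sufficient optimality condition (an assumed
% form for illustration, not copied from the cited paper): for every
% node i, commodity (d, m), and option j in {0} ∪ O(i), where j = 0
% denotes local computation and D is the total network cost,
\[
\frac{\partial D}{\partial \phi_{ij}^{(d,m)}}
\begin{cases}
= \lambda_i^{(d,m)}, & \phi_{ij}^{(d,m)} > 0, \\
\ge \lambda_i^{(d,m)}, & \phi_{ij}^{(d,m)} = 0,
\end{cases}
\qquad
\lambda_i^{(d,m)} := \min_{j'} \frac{\partial D}{\partial \phi_{ij'}^{(d,m)}} .
\]
```
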
Preprint
Full-text available
Collaborative edge computing (CEC) is an emerging paradigm where heterogeneous edge devices collaborate to fulfill computation tasks, such as model training or video processing, by sharing communication and computation resources. Nevertheless, the optimal data/result routing and computation offloading strategy in CEC with arbitrary topology still remains an open problem. In this paper, we formulate the flow model of partial-offloading and multi-hop routing for arbitrarily divisible tasks, where each node individually decides its routing/offloading strategy. In contrast to most existing works, our model applies to tasks with non-negligible result size, and allows data sources to be distinct from the result destination. We propose a network-wide cost minimization problem with congestion-aware convex cost functions for communication and computation. Such convex cost covers various performance metrics and constraints, such as average queueing delay with limited processor capacity. Although the problem is non-convex, we provide necessary conditions and sufficient conditions for the global-optimal solution, and devise a fully distributed algorithm that converges to the optimum in polynomial time, allows asynchronous individual updating, and is adaptive to changes in task pattern. Numerical evaluation shows that our proposed method significantly outperforms other baseline algorithms in multiple network instances, especially in congested scenarios.
... 4) Edge-to-Edge: Edge-to-edge offloading operates under the edge-edge collaboration manner, as discussed in Section II-D, and can alleviate the workload of an overloaded EN by offloading (or migrating) some workloads to a peer. The typical issues of edge-to-edge offloading mainly include: (i) task scheduling, which can ...
[72] Utility: a) offloading proportion determination; b) power allocation; c) energy harvesting.
[73] Energy consumption: a) task-destination association; b) offloading decision.
[74] Energy consumption: a) task-destination association; b) offloading decision; c) task ready time determination.
[75] Utility: a) task-destination association; b) offloading decision.
[76] Energy consumption: a) transmission power allocation; b) offloading decision; c) CPU clock allocation.
[77] Latency, energy consumption: a) task-destination association; b) wireless channel allocation; c) computation capability allocation.
[78] Energy consumption: a) task-destination association; b) computing capability allocation. ...
... "0" and "1" are the indicators of whether the task is offloaded or not. Generally, "0" means the whole task is processed locally, and "1" means it is offloaded to elsewhere [53], [75]. When the whole task is processed locally, the computing time, energy consumption, and the cost of processing task are determined by the local capacity. ...
Preprint
With the proliferation of the Internet of Things (IoT) and the wide penetration of wireless networks, the surging demand for data communications and computing calls for the emerging edge computing paradigm. By moving the services and functions located in the cloud to the proximity of users, edge computing can provide powerful computing, storage, networking, and communication capacity. Resource scheduling in edge computing, which is the key to the success of edge computing systems, has attracted increasing research interest. In this paper, we survey the state-of-the-art research findings to capture the research progress in this field. Specifically, we present the architecture of edge computing, under which different collaborative manners for resource scheduling are discussed. Particularly, we introduce a unified model before summarizing the current works on resource scheduling from three research issues, including computation offloading, resource allocation, and resource provisioning. Based on two modes of operation, i.e., centralized and distributed modes, different techniques for resource scheduling are discussed and compared. Also, we summarize the main performance indicators based on the surveyed literature. To shed light on the significance of resource scheduling in real-world scenarios, we discuss several typical application scenarios involved in the research of resource scheduling in edge computing. Finally, we highlight some open research challenges yet to be addressed and outline several open issues as future research directions.
... In recent years, data centers [1,2], cloud computing [3,4], edge computing [5,6], and the Internet of Things [7] have become research hotspots. The enhancement of computing power no longer simply relies on improving the processing capacity of a single computer, but rather, on increasing the number of computing devices to form a large-scale processing system. ...
Article
Full-text available
Bidirectional double-loop networks (BDLNs) are widely used in computer networks for their simplicity, symmetry and scalability. One common way to improve their performance is to decrease the diameter and average distance. Attempts have been made to find BDLNs with minimal diameters; however, such BDLNs will not necessarily have the minimum average distance. In this paper, we construct dual optimal BDLNs with minimum diameters and average distances using an efficient method based on coordinate embedding and transforming. First, we get the lower bounds of both the diameter and average distance by embedding a BDLN into Cartesian coordinates. Then, we construct tight optimal BDLNs that provide the aforementioned lower bounds based on an embedding graph. On the basis of node distribution regularity in tight optimal BDLNs, we construct dual optimal BDLNs with minimum diameters and average distances for any number of nodes. Finally, we present on-demand optimal message routing algorithms for the dual optimal BDLNs that we have constructed. The presented algorithms do not require routing tables and are efficient, requiring little computation.
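
For concreteness, here is a small hedged sketch, assuming the usual G(N; ±1, ±s) definition of a BDLN, that measures diameter and average distance by brute-force breadth-first search; the cited paper instead derives these quantities analytically via coordinate embedding:

```python
from collections import deque

# Hedged sketch: brute-force diameter/average distance of a bidirectional
# double-loop network G(N; +/-1, +/-s). The cited paper derives these
# analytically; this BFS only makes the quantities concrete.

def bdln_metrics(n, s):
    diam, dist_sum, pairs = 0, 0, 0
    for src in range(n):
        dist = [-1] * n
        dist[src] = 0
        q = deque([src])
        while q:
            u = q.popleft()
            for v in ((u + 1) % n, (u - 1) % n, (u + s) % n, (u - s) % n):
                if dist[v] < 0:
                    dist[v] = dist[u] + 1
                    q.append(v)
        diam = max(diam, max(dist))
        dist_sum += sum(dist)
        pairs += n - 1
    return diam, dist_sum / pairs

# BDLNs are vertex-transitive, so one BFS from node 0 would suffice;
# the full loop keeps the sketch general.
print(bdln_metrics(46, 7))   # (diameter, average distance)
```
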
... LCOR (Local Computation Optimal Routing): computes at the data sources (or with minimum offloading if pure local computation is not feasible), and optimally routes the result to destinations using the scaled gradient projection in [25]. LPR (Linear Program Rounded): the joint path-routing and offloading method of [6], which does not consider partial offloading, congestible links, or result flow. To adapt LPR's linear link costs to our schemes, we use the marginal cost at zero flow. ...
Preprint
Full-text available
Collaborative edge computing (CEC) is an emerging paradigm where heterogeneous edge devices (stakeholders) collaborate to fulfill computation tasks, such as model training or video processing, by sharing communication and computation resources. Nevertheless, the optimal data/result routing and computation offloading strategy in CEC with arbitrary topology still remains an open problem. In this paper, we formulate a partial-offloading and multi-hop routing model for arbitrarily divisible tasks. Each node individually decides the computation of the received data and the forwarding of data/result traffic. In contrast to most existing works, our model applies to tasks with non-negligible result size, and enables separable data sources and result destinations. We propose a network-wide cost minimization problem with congestion-aware cost to jointly optimize routing and computation offloading. This problem covers various performance metrics and constraints, such as average queueing delay with limited processor capacity. Although the problem is non-convex, we provide non-trivial necessary and sufficient conditions for the global-optimal solution, and devise a fully distributed algorithm that converges to the optimum in polynomial time, allows asynchronous individual updating, and is adaptive to changes in network topology or task pattern. Numerical evaluation shows that our proposed method significantly outperforms other baseline algorithms in multiple network instances, especially in congested scenarios.
Article
Full-text available
Edge computing has become popular in the last decade and will advance in the future to support real-time actionable analytics at the devices. One of the fundamental problems for future edge computing is to make distributed resource scheduling (DRS) decisions both at the end devices and edge devices to support requirements including autonomous computation, scalability, low latency, etc. Several surveys in the literature on edge computing have considered some aspects related to DRS, such as challenges and solution approaches, particularly for computation offloading and data management. However, to the best of our knowledge, there is no comprehensive survey on DRS in edge computing. This paper surveys the challenging issues, motivations, and existing works for enabling DRS in edge computing. We define and identify the unique issues for DRS in edge computing compared to traditional works on parallel and distributed systems. The motivations for DRS in edge computing are described by pointing out the benefits and emerging application scenarios. This paper also provides a taxonomy to classify the existing works from three perspectives, i.e., systems, problems, and solution approaches. Finally, we outline several future directions that can help researchers to advance the state of the art.
Conference Paper
The Internet of Things (IoT) has grown at a rapid pace in recent years. It requires a large amount of data and massive computational resources, and thus the concept of Fog Computing (FC) has emerged. FC attempts to overcome network latency by bringing computational resources closer to IoT devices. One important part of FC is an offloading mechanism to make proper decisions for better utilization of FC node(s), especially for real-time (low-latency and high-throughput) applications. Generally, offloading policies are categorized as centralized or distributed. However, the growing number of IoT devices, which leads to expansion of the FC layer beyond its initial configuration, means that centralized scheduling solutions for time-sensitive tasks suffer from two major challenges: first, increasing complexity, and second, lack of fault tolerance. To address these issues, scalable decentralized/distributed approaches have been developed to schedule tasks through autonomous collaboration between a small number of nodes (neighbors). Without a thorough picture of the network or the nodes' state, it is difficult to design algorithms that make optimum decisions. This paper presents a scalable algorithm for offloading time-sensitive tasks through a semi-network-aware distributed scheduling mechanism. Based on the evaluation results obtained for acceptance rate, response time, and network resource usage, the proposed method outperforms the state of the art on average.
Article
Network function computation is investigated in the letter. In the model, a target function, of which the inputs are generated at multiple source nodes, is required to be computed with zero error at a sink node over a network. Toward this end, distributed coding by integrating communication and computation in networks is regarded as an efficient solution. We are interested in its fundamental computing capacity of importance in theory and applications. In the letter, we explicitly characterize the capacities of computing all the vector-linear functions over the diamond network. The diamond network has an important topology structure which not only is typical for many multi-terminal information-theoretic problems but also illustrates the combinatorial nature of the computing problem. By applying the computing capacities thus obtained, we solve the solvability problem of vector-linear functions over the diamond network. We determine all the solvable vector-linear functions and obtain an enhanced result that the remaining vector-linear functions are not only linearly non-solvable but also non-linearly non-solvable.
Article
This paper comprehensively investigates spatio-temporal dynamics for task offloading in the Internet of Things (IoT) Edge Network (iTEN) in order to maximize utility. Different from previous works in the literature that only consider partially dynamic factors, this paper takes into account the time-varying wireless link quality, communication power, and wireless interference on task offloading, as well as the spatio-temporal dynamics of the energy harvested by terminals and their charging efficiency. Our goal is to maximize utility during task offloading by considering the above-mentioned factors, which are relatively complex but closer to reality. This paper designs the Time-Expanded Graph (TEG) to transfer network dynamics and wireless interference into static weights in the graph, so that the algorithm can be devised easily. On the basis of the TEG, this paper first devises the Single Terminal (ST) utility maximization algorithm for the case of a single terminal. In the case of multiple terminals, it is very complicated to directly solve the utility maximization of task offloading. This paper adopts the framework of Garg and Könemann and devises a multi-terminal algorithm (MT) to maximize the total utility of all terminals. MT is a fast approximation algorithm whose approximation ratio is 1 − 3ς, where 0 < ς < 1/3 is a small positive constant. Comprehensive experiments illustrate that our algorithm significantly improves the overall utility compared to three baseline algorithms.
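
The Garg and Könemann framework referenced here is a primal-dual multiplicative-update scheme for fractional packing problems. A generic sketch of its standard update follows; how MT adapts it to the time-expanded graph is not stated in the abstract, so the details below follow the standard framework rather than the paper:

```latex
% Standard Garg–Könemann primal–dual update (generic form; the paper's
% MT algorithm adapts it to the time-expanded graph). Initialize dual
% lengths l_e = delta / c_e on every edge e, then repeat while the
% shortest path under l is shorter than 1:
\[
P \leftarrow \arg\min_{p} \sum_{e \in p} l_e, \qquad
f \leftarrow \min_{e \in P} c_e, \qquad
l_e \leftarrow l_e \Bigl(1 + \varepsilon \,\frac{f}{c_e}\Bigr)
\quad \forall e \in P .
\]
% Scaling the accumulated flow by log_{1+eps}(1/delta) makes it
% feasible and within a (1 - O(eps)) factor of the optimum.
```
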
Chapter
This chapter elaborates on the computation infrastructure, namely cloud, fog, and edge computing. Various kinds of data offloading mechanisms and a general multi-layer computing framework addressing security aspects are presented. Presently, smart Internet of Things (IoT) devices offload large data aggregation, processing, and storage to different computing platforms such as edge, fog, and cloud. In this chapter, various computing paradigms, including their benefits and limitations, are discussed. This chapter also discusses the total cost, in terms of latency and energy, required to complete a task on user devices as well as remotely (on the edge or cloud). Further, various security and privacy issues are discussed that need to be considered for large deployments of computing infrastructure for real-time ICT applications such as healthcare. Finally, the chapter provides challenges and future directions for research in these computing paradigms, including security and privacy issues.
Article
Unmanned aerial vehicles (UAVs) are beginning to make a splash in emergency disaster scenarios owing to their excellent air mobility and flexibility. Considering that large base stations often cannot be deployed to disaster areas promptly, and that communication links between UAVs vary over time, we formulate the task scheduling problem for disaster scenarios as a two-stage Lyapunov optimization problem and propose a dispersed computing network consisting of UAVs and ground mobile devices, which is used for collaborative computing. Using Lyapunov techniques, we decouple the long-term stability of the task queues of the nodes in the system into per-slot deterministic optimization problems. By jointly optimizing the task size transmitted from the control center to the UAVs and the task sizes computed locally and offloaded by the UAVs and mobile devices, the energy consumption of the dispersed computing system is minimized while ensuring the stability of the computation queues. The simulation results verify that our proposed algorithm is close to the optimal case in terms of queue stability, and that it reduces system energy consumption by more than 50% compared to local computation at the UAVs.
Article
Full-text available
Processing data around the point of capture, fog computing can support computationally demanding Internet-of-Things (IoT) services. Distributed online optimization is important given the size of the IoT, but challenging due to time variations of random traffic and the non-uniform connectivity (or cardinality) of edge servers and IoT devices. This paper presents a distributed online learning approach to asymptotically minimizing the time-average cost of fog computing in the absence of a priori knowledge of traffic randomness, for light-weight, delay-tolerant application scenarios. Stochastic gradient descent is exploited to decouple the optimizations between time slots. A graph matching problem is then formulated for every time slot by decoupling and unifying the non-uniform cardinalities, and solved in a distributed manner by developing a new linear 1/2-approximation method. We prove that the optimality loss resulting from the distributed approximate graph matching method can be compensated, and diminishes as the learning time increases. Corroborated by simulations, the proposed distributed online learning is asymptotically optimal and superior to the state of the art in terms of throughput and energy efficiency.
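
For intuition on the 1/2 factor, greedy edge-by-edge selection is the textbook centralized route to a 1/2-approximation for maximum-weight matching; the paper's method is distributed, so the sketch below (with invented device/server weights) only illustrates the approximation idea:

```python
# Hedged sketch: greedy 1/2-approximation for maximum-weight matching.
# The cited paper uses a distributed variant; this centralized version
# only illustrates where the 1/2 factor comes from.

def greedy_matching(edges):
    """edges: list of (weight, u, v). Returns a matching as a list."""
    matched, matching = set(), []
    for w, u, v in sorted(edges, reverse=True):  # heaviest edge first
        if u not in matched and v not in matched:
            matching.append((u, v, w))
            matched.update((u, v))
    return matching

# Devices d1..d3 and edge servers s1..s2 with utility weights.
edges = [(4.0, "d1", "s1"), (3.0, "d2", "s1"),
         (2.5, "d2", "s2"), (1.0, "d3", "s2")]
print(greedy_matching(edges))  # [('d1', 's1', 4.0), ('d2', 's2', 2.5)]
```
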
Conference Paper
Full-text available
Mobile edge computing is an emerging technology to offer resource-intensive yet delay-sensitive applications from the edge of mobile networks, where a major challenge is to allocate limited edge resources to competing demands. While prior works often make a simplifying assumption that resources assigned to different users are non-sharable, this assumption does not hold for storage resources, where users interested in services (e.g., data analytics) based on the same set of data/code can share storage resource. Meanwhile, serving each user request also consumes non-sharable resources (e.g., CPU cycles, bandwidth). We study the optimal provisioning of edge services with non-trivial demands of both sharable (storage) and non-sharable (communication, computation) resources via joint service placement and request scheduling. In the homogeneous case, we show that while the problem is polynomial-time solvable without storage constraints, it is NP-hard even if each edge cloud has unlimited communication or computation resources. We further show that the hardness is caused by the service placement subproblem, while the request scheduling subproblem is polynomial-time solvable via maximum-flow algorithms. In the general case, both subproblems are NP-hard. We develop a constant-factor approximation algorithm for the homogeneous case and efficient heuristics for the general case. Our trace-driven simulations show that the proposed algorithms, especially the approximation algorithm, can achieve near-optimal performance, serving 2-3 times more requests than a baseline solution that optimizes service placement and request scheduling separately.
Article
Full-text available
We consider a general multi-user Mobile Cloud Computing (MCC) system where each mobile user has multiple independent tasks. These mobile users share the computation and communication resources while offloading tasks to the cloud. We study both the conventional MCC, where tasks are offloaded to the cloud through a wireless access point, and MCC with a computing access point (CAP), where the CAP serves both as the network access gateway and a computation service provider to the mobile users. We aim to jointly optimize the offloading decisions of all users as well as the allocation of computation and communication resources, to minimize the overall cost of energy, computation, and delay for all users. The optimization problem is formulated as a non-convex quadratically constrained quadratic program, which is NP-hard in general. For the case without a CAP, an efficient approximate solution named MUMTO is proposed by using separable semidefinite relaxation (SDR), followed by recovery of the binary offloading decision and optimal allocation of the communication resource. To solve the more complicated problem with a CAP, we further propose an efficient three-step algorithm named MUMTO-C, comprising generalized MUMTO SDR with CAP, alternating optimization, and sequential tuning, which always computes a locally optimal solution. For performance benchmarking, we further present numerical lower bounds of the minimum system cost with and without the CAP. By comparison with these lower bounds, our simulation results show that the proposed solutions for both scenarios give nearly optimal performance under various parameter settings, and the resultant efficient utilization of a CAP can bring substantial cost benefit.
Article
Full-text available
Fog computing enables resource-limited network devices to help each other with computationally demanding tasks, but has yet to be implemented at large scale due to sophisticated control and network inhomogeneity. This paper presents a new fully distributed online optimization to asymptotically minimize the time-average cost of fog computing, where tasks are selected to be offloaded and processed independently across different links and devices by measuring their cost-effectiveness at each time slot. A key contribution is that we optimize the cost-effectiveness measures, which achieve asymptotic optimality over an infinite time horizon. Another contribution is that we optimize placeholders at the devices, which create collaborative computing regions of tasks in the vicinity of the point of capture, prevent tasks from being offloaded beyond these regions, preserve the asymptotic optimality, and reduce delay. This is achieved in a distributed fashion by discovering the optimal substructure of the placeholders. Simulations show that the average size of the collaborative regions is only 3.2 out of 500 servers in total, and that the system income increases by 43% as compared to existing techniques.
Article
Full-text available
Task admission is critical to delay-sensitive applications in mobile edge computing, but technically challenging due to its combinatorial mixed nature and consequently limited scalability. We propose an asymptotically optimal task admission approach which is able to guarantee task delays and achieve a (1 − ε)-approximation of the computationally prohibitive maximum energy saving at a time complexity that scales linearly with the number of devices, where ε is linear in the energy quantization interval. The key idea is to transform the mixed integer programming of task admission into an integer programming (IP) problem with the optimal substructure by pre-admitting resource-restrained devices. Another important aspect is a new quantized dynamic programming algorithm which we develop to exploit the optimal substructure and solve the IP. The quantization interval of energy is optimized to achieve an [O(ε), O(1/ε)] tradeoff between the optimality loss and the time complexity of the algorithm. Simulations show that our approach is able to dramatically enhance the scalability of task admission at a marginal cost of extra energy, as compared to the optimal branch-and-bound method, and can be efficiently implemented for online programming.
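
The quantized dynamic program is reminiscent of the classic knapsack FPTAS: values are rounded to a grid whose interval is proportional to ε, so the DP table stays small at the price of an O(ε) loss. A hedged sketch of that standard construction (not the paper's exact task-admission algorithm):

```python
# Hedged sketch: quantized dynamic programming for knapsack, the standard
# FPTAS idea behind (1 - eps)-approximate selection. Not the cited
# paper's exact task-admission algorithm.

def quantized_knapsack(values, costs, budget, eps=0.1):
    n = len(values)
    scale = eps * max(values) / n              # quantization interval
    qvals = [int(v / scale) for v in values]   # rounded values
    vsum = sum(qvals)
    INF = float("inf")
    # best[q] = minimum cost achieving quantized value exactly q
    best = [0.0] + [INF] * vsum
    for qv, c in zip(qvals, costs):
        for q in range(vsum, qv - 1, -1):
            if best[q - qv] + c < best[q]:
                best[q] = best[q - qv] + c
    # Largest quantized value affordable within the budget.
    q_star = max(q for q in range(vsum + 1) if best[q] <= budget)
    return q_star * scale                      # >= (1 - eps) * OPT

print(round(quantized_knapsack(values=[6.0, 5.0, 4.0],
                               costs=[5.0, 4.0, 3.0], budget=7.0), 2))
```
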
Article
Full-text available
Multimedia Internet-of-Things (IoT) systems have been widely used in surveillance, automatic behavior analysis, and event recognition, integrating image processing, computer vision, and networking capabilities. In conventional multimedia IoT systems, videos captured by surveillance cameras must be delivered to remote IoT servers for video analysis. However, the long-distance transmission of a large volume of video chunks may cause congestion and delays due to limited network bandwidth. Nowadays, mobile devices, e.g., smartphones and tablets, are resource-abundant in computation and communication capabilities. Thus, these devices have the potential to extract features from videos on behalf of the remote IoT servers. By sending back only a few video features to the remote servers, the bandwidth starvation of delivering original video chunks can be avoided. In this paper, we propose an edge computing framework to enable cooperative processing on resource-abundant mobile devices for delay-sensitive multimedia IoT tasks. We identify that the key challenges in the proposed edge computing framework are to optimally form mobile devices into video processing groups and to dispatch video chunks to proper video processing groups. Based on the derived optimal matching theorem, we put forward a cooperative video processing scheme formed by two efficient algorithms to tackle the above challenges, which achieves sub-optimal performance on human detection accuracy. The proposed scheme has been evaluated under diverse parameter settings. Extensive simulation confirms the superiority of the proposed scheme over two other baseline schemes.
Article
Full-text available
Mobile edge computing (MEC) is of particular interest to the Internet of Things (IoT), where inexpensive simple devices can have complex tasks offloaded to and processed at powerful infrastructure. Scheduling is challenging due to stochastic task arrivals and wireless channels, a congested air interface, and, more prominently, prohibitive feedback from thousands of devices. In this paper, we generate asymptotically optimal schedules tolerant to out-of-date network knowledge, thereby relieving stringent requirements on feedback. A perturbed Lyapunov function is designed to stochastically maximize a network utility balancing throughput and fairness. A knapsack problem is solved per slot for the optimal schedule, provided up-to-date knowledge on the data and energy backlogs of all devices. The knapsack problem is relaxed to accommodate out-of-date network states. Encapsulating the optimal schedule under up-to-date network knowledge, the solution under partially out-of-date knowledge preserves asymptotic optimality and allows devices to self-nominate for feedback. Corroborated by simulations, our approach is able to dramatically reduce feedback at no cost to optimality. The number of devices that need to feed back is reduced to fewer than 60 out of a total of 5000 IoT devices.
Article
Full-text available
In this paper, we consider a multi-user mobile edge computing (MEC) network powered by wireless power transfer (WPT), where each energy-harvesting wireless device (WD) follows a binary computation offloading policy, i.e., the data set of a task has to be executed as a whole, either locally or remotely at the MEC server via task offloading. In particular, we are interested in maximizing the (weighted) sum computation rate of all the WDs in the network by jointly optimizing the individual computing mode selection (i.e., local computing or offloading) and the system transmission time allocation (on WPT and task offloading). The major difficulty lies in the combinatorial nature of multi-user computing mode selection and its strong coupling with transmission time allocation. To tackle this problem, we first consider a decoupled optimization, where we assume that the mode selection is given and propose a simple bi-section search algorithm to obtain the conditionally optimal time allocation. On top of that, a coordinate descent method is devised to optimize the mode selection. The method is simple to implement but may suffer from high computational complexity in a large-size network. To address this problem, we further propose a joint optimization method based on the ADMM (alternating direction method of multipliers) decomposition technique, which enjoys a much slower increase of computational complexity as the network size increases. Extensive simulations show that both proposed methods can efficiently achieve near-optimal performance under various network setups and significantly outperform the other representative benchmark methods considered.
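
The bi-section step relies on the objective being concave in the time allocation once the mode selection is fixed, so its derivative is monotone. A hedged sketch with a hypothetical concave rate function standing in for the paper's expressions:

```python
# Hedged sketch: bisection over the WPT time fraction t in (0, 1) for a
# fixed offloading mode selection. The rate function below is a
# hypothetical stand-in for the paper's expressions; bisection only
# needs the objective to be concave in t.
import math

def sum_rate(t, k_off=2.0, k_loc=1.5):
    # Offloading rate grows with harvest time t but shrinks with the
    # remaining (1 - t) transmission window; local term grows with t.
    return (1 - t) * math.log2(1 + k_off * t / (1 - t)) + k_loc * t ** (1 / 3)

def bisect_time(f, lo=1e-6, hi=1 - 1e-6, tol=1e-9):
    d = lambda t, h=1e-7: (f(t + h) - f(t - h)) / (2 * h)  # numeric derivative
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if d(mid) > 0:      # still increasing: optimum lies to the right
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

t_star = bisect_time(sum_rate)
print(round(t_star, 4), round(sum_rate(t_star), 4))
```
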
Article
Full-text available
Driven by the visions of Internet of Things and 5G communications, recent years have seen a paradigm shift in mobile computing, from the centralized Mobile Cloud Computing towards Mobile Edge Computing (MEC). The main feature of MEC is to push mobile computing, network control and storage to the network edges (e.g., base stations and access points) so as to enable computation-intensive and latency-critical applications at the resource-limited mobile devices. MEC promises dramatic reduction in latency and mobile energy consumption, tackling the key challenges for materializing 5G vision. The promised gains of MEC have motivated extensive efforts in both academia and industry on developing the technology. A main thrust of MEC research is to seamlessly merge the two disciplines of wireless communications and mobile computing, resulting in a wide-range of new designs ranging from techniques for computation offloading to network architectures. This paper provides a comprehensive survey of the state-of-the-art MEC research with a focus on joint radio-and-computational resource management. We also discuss a set of issues, challenges and future research directions for MEC research, including MEC system deployment, cache-enabled MEC, mobility management for MEC, green MEC, as well as privacy-aware MEC. Advancements in these directions will facilitate the transformation of MEC from theory to practice. Finally, we introduce recent standardization efforts on MEC as well as some typical MEC application scenarios.
Article
Full-text available
Recently, many applications (e.g., wearable cognitive assistance) built on Internet of Things (IoT) devices (e.g., Apple Watch, Google Glass) have been developing rapidly. However, the current Internet may not be suitable for future IoT applications due to the limited capabilities of the data caching and content processing services in the existing Internet architecture. In this paper, we extend Named Data Networking (NDN) and develop the object-oriented network (OON) as a novel Internet architecture that implements both native data caching and content processing in the network layer. The datagrams with processable payloads, as well as the cached contents, are both referred to as operable objects in OON for abstraction. With the proposed OON architecture, operable objects can be processed and transmitted by forwarding them to the subroutines of content processing programs and the interfaces of content deliveries, respectively, according to the proposed naming rules. For performance evaluation, we implement a dynamic adaptive multimedia streaming application atop the proposed OON architecture in ns-3. Our simulation results show that the proposed OON architecture can effectively increase the potential quality of experience (QoE) for mobile users.
Article
Full-text available
Integrating mobile-edge computing (MEC) and wireless power transfer (WPT) is a promising technique in the Internet of Things (IoT) era. It can provide massive low-power mobile devices with enhanced computation capability and sustainable energy supply. In this paper, we consider a wireless powered multiuser MEC system, where a multi-antenna access point (AP) (integrated with an MEC server) broadcasts wireless power to charge multiple users, and each user node relies on the harvested energy to execute latency-sensitive computation tasks. With MEC, these users can execute their respective tasks locally by themselves or offload all or part of the tasks to the AP based on a time division multiple access (TDMA) protocol. Under this setup, we pursue an energy-efficient wireless powered MEC system design by jointly optimizing the transmit energy beamformer at the AP, the central processing unit (CPU) frequency and the offloaded bits at each user, as well as the time allocation among different users. In particular, we minimize the energy consumption at the AP over a particular time block subject to the computation latency and energy harvesting constraints per user. By formulating this problem in a convex framework and employing the Lagrange duality method, we obtain its optimal solution in semi-closed form. Numerical results demonstrate the benefit of the proposed joint design over alternative benchmark schemes in terms of the achieved energy efficiency.
Article
Full-text available
Mobile-edge cloud computing is a new paradigm to provide cloud computing capabilities at the edge of pervasive radio access networks in close proximity to mobile users. In this paper, we first study the multi-user computation offloading problem for mobile-edge cloud computing in a multi-channel wireless interference environment. We show that it is NP-hard to compute a centralized optimal solution, and hence adopt a game-theoretic approach for achieving efficient computation offloading in a distributed manner. We formulate the distributed computation offloading decision-making problem among mobile device users as a multi-user computation offloading game. We analyze the structural property of the game and show that the game admits a Nash equilibrium and possesses the finite improvement property. We then design a distributed computation offloading algorithm that can achieve a Nash equilibrium, derive the upper bound of the convergence time, and quantify its efficiency ratio over the centralized optimal solutions in terms of two important performance metrics. We further extend our study to the scenario of multi-user computation offloading in the multi-channel wireless contention environment. Numerical results corroborate that the proposed algorithm can achieve superior computation offloading performance and scales well as the number of users increases.
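
Because the game possesses the finite improvement property, asynchronous best-response updates terminate at a Nash equilibrium. A hedged toy sketch (the cost model and all constants are invented for illustration, not the paper's):

```python
# Hedged toy sketch: best-response dynamics for a multi-channel
# computation offloading game. Costs are illustrative; the finite
# improvement property guarantees termination at a Nash equilibrium
# for congestion games of this type.

LOCAL, CHANNELS, USERS = 0, (1, 2), 5
LOCAL_COST = 3.0

def cost(user, choice, decisions):
    if choice == LOCAL:
        return LOCAL_COST
    # Offloading cost grows with the number of users sharing the channel.
    sharers = sum(1 for u, c in enumerate(decisions)
                  if c == choice and u != user)
    return 1.0 + 1.5 * sharers

decisions = [LOCAL] * USERS
improved = True
while improved:                       # finite improvement property
    improved = False
    for u in range(USERS):
        options = (LOCAL,) + CHANNELS
        best = min(options, key=lambda c: cost(u, c, decisions))
        if cost(u, best, decisions) < cost(u, decisions[u], decisions):
            decisions[u] = best       # one beneficial deviation
            improved = True

print(decisions)   # a Nash equilibrium: [1, 2, 1, 2, 0]
```
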
Article
Full-text available
We describe the architecture and prototype implementation of an assistive system based on Google Glass devices for users in cognitive decline. It combines the first-person image capture and sensing capabilities of Glass with remote processing to perform real-time scene interpretation. The system architecture is multi-tiered. It offers tight end-to-end latency bounds on compute-intensive operations, while addressing concerns such as limited battery capacity and limited processing capability of wearable devices. The system gracefully degrades services in the face of network failures and unavailability of distant architectural tiers.
Article
Full-text available
We study the design of nearly-linear-time algorithms for approximately solving positive linear programs (see [LN, STOC'93] [BBR, FOCS'97] [You, STOC'01] [KY, FOCS'07] [AK, STOC'08]). Both the parallel and the sequential deterministic versions of these algorithms require $\tilde{O}(\varepsilon^{-4})$ iterations, a dependence that has not been improved since the introduction of these methods in 1993. Moreover, previous algorithms and their analyses rely on update steps and convergence arguments that are combinatorial in nature, but do not seem to arise naturally from an optimization viewpoint. In this paper, we leverage insights from optimization theory to construct a novel algorithm that breaks the longstanding $\tilde{O}(\varepsilon^{-4})$ barrier. Our algorithm has a simple analysis and a clear motivation. Our work introduces a number of novel techniques, such as the combined application of gradient descent and mirror descent, and a truncated, smoothed version of the standard multiplicative update, which may be of independent interest.
Article
Full-text available
Algorithms in varied fields use the idea of maintaining a distribution over a certain set and use the multiplicative update rule to iteratively change these weights. Their analyses are usually very similar and rely on an exponential potential function. In this survey we present a simple meta-algorithm that unifies many of these disparate algorithms and derives them as simple instantiations of the meta-algorithm. We feel that since this meta-algorithm and its analysis are so simple, and its applications so broad, it should be a standard part of algorithms courses, like “divide and conquer.”
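
The meta-algorithm in question is the multiplicative weights update: maintain one weight per expert, act proportionally to the weights, and shrink each weight multiplicatively by its observed cost. A minimal sketch, with the standard regret bound noted in a comment:

```python
# Minimal sketch of the multiplicative weights update (MWU)
# meta-algorithm for learning from n "experts" over T rounds.
import random

def mwu(cost_fn, n, T, eta=0.1):
    """cost_fn(t) -> list of n costs in [0, 1] for round t."""
    w = [1.0] * n
    expected_cost = 0.0
    for t in range(T):
        s = sum(w)
        probs = [x / s for x in w]              # play experts proportionally
        costs = cost_fn(t)
        expected_cost += sum(p * c for p, c in zip(probs, costs))
        w = [x * (1 - eta * c) for x, c in zip(w, costs)]  # multiplicative update
        # Regret bound: expected_cost <= min_i sum_t costs_i + eta*T + ln(n)/eta
    return expected_cost, w

random.seed(0)
# Expert 0 is consistently good; MWU concentrates weight on it.
total, weights = mwu(lambda t: [0.1, random.random(), random.random()], n=3, T=200)
print(round(total, 2), [round(x, 4) for x in weights])
```
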
Article
Full-text available
Let N be a finite set and z be a real-valued function defined on the set of subsets of N that satisfies z(S) + z(T) ≥ z(S⋃T) + z(S⋂T) for all S, T ⊆ N. Such a function is called submodular. We consider the problem max{z(S) : S ⊆ N, |S| ≤ K, z(S) submodular}. Several hard combinatorial optimization problems can be posed in this framework. For example, the problem of finding a maximum weight independent set in a matroid, when the elements of the matroid are colored and the elements of the independent set can have no more than K colors, is in this class. The uncapacitated location problem is a special case of this matroid optimization problem. We analyze greedy and local improvement heuristics and a linear programming relaxation for this problem. Our results are worst case bounds on the quality of the approximations. For example, when z(S) is nondecreasing and z(∅) = 0, we show that a "greedy" heuristic always produces a solution whose value is at least 1 − [(K − 1)/K]^K times the optimal value. This bound can be achieved for each K and has a limiting value of (e − 1)/e, where e is the base of the natural logarithm.
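
A hedged sketch of the greedy heuristic analyzed above, instantiated on weighted coverage (a standard nondecreasing submodular function with z(∅) = 0); the greedy value is guaranteed to be at least 1 − [(K − 1)/K]^K of the optimum:

```python
# Hedged sketch: greedy maximization of a nondecreasing submodular
# function z with z(empty) = 0 under |S| <= K. Weighted coverage is a
# standard example of such a function; the instance is invented.

WEIGHTS = {"a": 3.0, "b": 2.0, "c": 2.0, "d": 1.0}
SETS = {1: {"a"}, 2: {"b", "c"}, 3: {"a", "b"}, 4: {"c", "d"}}

def z(S):
    covered = set().union(*(SETS[i] for i in S)) if S else set()
    return sum(WEIGHTS[e] for e in covered)

def greedy(K):
    S = set()
    for _ in range(K):
        # pick the element with the largest marginal gain z(S + i) - z(S)
        gains = {i: z(S | {i}) - z(S) for i in SETS if i not in S}
        best = max(gains, key=gains.get)
        if gains[best] <= 0:
            break
        S.add(best)
    return S, z(S)   # value >= (1 - ((K - 1) / K) ** K) * optimum

print(greedy(K=2))   # ({3, 4}, 8.0): covers a, b, c, d
```
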
Book
Full-text available
Every aspect of human life is crucially determined by the result of decisions. Whereas private decisions may be based on emotions or personal taste, the complex professional environment of the 21st century requires a decision process which can be formalized and validated independently from the involved individuals. Therefore, a quantitative formulation of all factors influencing a decision and also of the result of the decision process is sought.
Book
This volume bears on wireless network modeling and performance analysis. The aim is to show how stochastic geometry can be used in a more or less systematic way to analyze the phenomena that arise in this context. It first focuses on medium access control mechanisms used in ad hoc networks and in cellular networks. It then discusses the use of stochastic geometry for the quantitative analysis of routing algorithms in mobile ad hoc networks. The appendix also contains a concise summary of wireless communication principles and of the network architectures considered in the two volumes.
Article
Mobile-edge computing (MEC) has recently emerged as a cost-effective paradigm to enhance the computing capability of hardware-constrained wireless devices (WDs). In this paper, we first consider a two-user MEC network, where each WD has a sequence of tasks to execute. In particular, we consider task dependency between the two WDs, where the input of a task at one WD requires the final task output at the other WD. Under the considered task-dependency model, we study the optimal task offloading policy and resource allocation (e.g., on offloading transmit power and local CPU frequencies) that minimize the weighted sum of the WDs' energy consumption and task execution time. The problem is challenging due to the combinatorial nature of the offloading decisions among all tasks and the strong coupling with resource allocation. To tackle this problem, we first assume that the offloading decisions are given and derive the closed-form expressions of the optimal offloading transmit power and local CPU frequencies. Then, an efficient bi-section search method is proposed to obtain the optimal solutions. Furthermore, we prove that the optimal offloading decisions follow a one-climb policy, based on which a reduced-complexity Gibbs sampling algorithm is proposed to obtain the optimal offloading decisions. We then extend the investigation to a general multi-user scenario, where the input of a task at one WD requires the final task outputs from multiple other WDs. Numerical results show that the proposed method can significantly outperform the other representative benchmarks and efficiently achieves low complexity with respect to the call graph size.
Article
In-network caching constitutes a promising approach to reduce traffic loads and alleviate congestion in both wired and wireless networks. In this paper, we study the joint caching and routing problem in congestible networks of arbitrary topology (JoCRAT) as a generalization of previous efforts in this particular field. We show that JoCRAT extends many previous problems in the caching literature that are intractable even with specific topologies and/or assumed unlimited bandwidth of communications. To handle this significant but challenging problem, we develop a novel approximation algorithm with guaranteed performance bound based on a randomized rounding technique. Evaluation results demonstrate that our proposed algorithm achieves near-optimal performance over a broad array of synthetic and real networks, while significantly outperforming the state-of-the-art methods.
Article
A coflow is a collection of related parallel flows that occur typically between two stages of a multi-stage computing task in a network, such as shuffle flows in MapReduce. The coflow abstraction allows applications to convey their semantics to the network so that application-level requirements can be better satisfied. In this paper, we study the routing and scheduling of multiple coflows to minimize the total weighted coflow completion time (CCT). We first propose a rounding-based randomized approximation algorithm, called OneCoflow, for single coflow routing and scheduling. The multiple coflow problem is more challenging as coexisting coflows will compete for the same network resources, such as link bandwidth. To minimize the total weighted CCT, we derive an online multiple coflow routing and scheduling algorithm, called OMCoflow. We then derive a bound on the competitive ratio for our problem and prove that the competitive ratio of OMCoflow is nearly tight. To the best of our knowledge, this is the first online algorithm with theoretical performance guarantees that considers routing and scheduling simultaneously for multiple coflows. Compared with existing methods, OMCoflow runs more efficiently and avoids frequently rerouting the flows. Extensive simulations on a Facebook data trace show that OMCoflow outperforms the state-of-the-art heuristic schemes significantly (e.g., reducing the total weighted CCT by up to 41.8% and the execution time by up to 99.2% against RAPIER).
Article
Many edge computing systems rely on virtual machines (VMs) to deliver their services. It is challenging, however, to deploy the virtualization mechanisms on edge computing hardware infrastructures. In this paper, we introduce the engineering and research trends of achieving efficient VM management in edge computing. We elaborate on: 1) the virtualization frameworks for edge computing developed in both the industry and the academia; 2) the virtualization techniques tailored for edge computing; 3) the placement and scheduling algorithms optimized for edge computing; and 4) the research problems in security related to virtualization of edge computing.
Article
Mobile edge computing (MEC) has recently emerged as a promising technology to release the tension between computation-intensive applications and resource-limited mobile terminals (MTs). In this paper, we study delay-optimal computation offloading in computation-constrained MEC systems. We consider the computation task queue at the MEC server due to its constrained computation capability. In this case, the task queue at the MT and that at the MEC server are strongly coupled in a cascade manner, which creates complex interdependencies and brings new technical challenges. We model the computation offloading problem as an infinite-horizon average-cost Markov decision process (MDP), and approximate it by a virtual continuous-time system (VCTS) with reflections. Different from most existing works, we develop a dynamic instantaneous rate estimation for deriving the closed-form approximate priority functions in different scenarios. Based on the approximate priority functions, we propose a closed-form multi-level water-filling computation offloading solution to characterize the influence of not only the local queue state information (LQSI) but also the remote queue state information (RQSI). Furthermore, we discuss the extension of our proposed scheme to multi-MT multi-server scenarios. Finally, the simulation results show that the proposed scheme outperforms the conventional schemes.
Article
There is a growing interest in the development of in-network dispersed computing paradigms that leverage the computing capabilities of heterogeneous resources dispersed across the network for processing a massive amount of data collected at the edge of the network. We consider the problem of task scheduling for such networks, in a dynamic setting in which arriving computation jobs are modeled as chains, with nodes representing tasks and edges representing precedence constraints among tasks. In our proposed model, motivated by significant communication costs in dispersed computing environments, the communication times are taken into account. More specifically, we consider a network where servers can serve all task types, and sending the outputs of processed tasks from one server to another server results in some communication delay. We first characterize the capacity region of the network, then propose a novel virtual queueing network encoding the state of the network. Finally, we propose a Max-Weight type scheduling policy, and considering the stochastic network in the fluid limit, we use a Lyapunov argument to show that the policy is throughput-optimal. Beyond the model of chains, we extend the scheduling problem to the model of the directed acyclic graph (DAG), which imposes a new challenge, namely the logic dependency difficulty: the data of processed parent tasks must be sent to the same server for processing the child task. We propose a virtual queueing network for DAG scheduling over broadcast networks, where servers always broadcast the data of processed tasks to other servers, and prove that the Max-Weight policy is throughput-optimal.
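
A hedged toy sketch of a Max-Weight type policy: in every slot, each server serves the (virtual) queue maximizing the backlog-times-rate product. Arrival probabilities and service rates below are invented for illustration:

```python
# Hedged toy sketch of a Max-Weight scheduling policy: at every slot,
# each server serves the queue with the largest backlog x rate product.
# Arrival and service rates are illustrative only.
import random

random.seed(1)
NUM_QUEUES, NUM_SERVERS, SLOTS = 3, 2, 10_000
ARRIVAL = [0.4, 0.5, 0.3]                    # Bernoulli arrival rates per slot
RATE = [[1.0, 0.6], [0.5, 1.2], [0.8, 0.8]]  # RATE[q][s]: service rate

q = [0.0] * NUM_QUEUES
for _ in range(SLOTS):
    # Max-Weight: server s serves the queue maximizing q[k] * RATE[k][s].
    for s in range(NUM_SERVERS):
        k = max(range(NUM_QUEUES), key=lambda k: q[k] * RATE[k][s])
        q[k] = max(0.0, q[k] - RATE[k][s])
    for k in range(NUM_QUEUES):
        q[k] += random.random() < ARRIVAL[k]  # Bernoulli arrivals

print([round(x, 1) for x in q])   # backlogs stay bounded (stable)
```
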
Article
In this paper, we consider the problem of task offloading in a software-defined access network, where IoT devices are connected to fog computing nodes by multi-hop IoT access-points (APs). The proposed scheme considers the following aspects in a fog-computing-based IoT architecture: 1) optimal decision on local or remote task computation; 2) optimal fog node selection; and 3) optimal path selection for offloading. Accordingly, we formulate the multi-hop task offloading problem as an integer linear program (ILP). Since the feasible set is non-convex, we propose a greedy-heuristic-based approach to efficiently solve the problem. The greedy solution takes into account delay, energy consumption, multi-hop paths, and dynamic network conditions, such as link utilization and SDN rule-capacity. Experimental results show that the proposed scheme is capable of reducing the average delay and energy consumption by 12% and 21%, respectively, compared with the state of the art.
Article
Future 5G wireless networks aim to support high-rate data communications and high-speed mobile computing. To achieve this goal, mobile edge computing (MEC) and device-to-device (D2D) communications have been developed recently, both of which take advantage of proximity for better performance. In this paper, we integrate D2D communications with MEC to further improve the computation capacity of cellular networks, where the task of each device can be offloaded to an edge node and a nearby D2D device. We aim to maximize the number of devices supported by the cellular networks under constraints on both communication and computation resources. The optimization problem is formulated as a mixed integer nonlinear problem, which is not easy to solve in general. To tackle it, we decouple it into two subproblems. The first one minimizes the required edge computation resource for a given D2D pair, while the second one maximizes the number of supported devices via optimal D2D pairing. We prove that the optimal solutions to the two subproblems compose the optimal solution to the original problem. The optimal algorithm for the original problem is then developed by solving the two subproblems, and some insightful results, such as the optimal transmit power allocation and the task offloading strategy, are also highlighted. Our proposal is finally tested by extensive numerical simulations, which demonstrate that combining D2D communications with MEC can significantly enhance the computation capacity of the system.
Conference Paper
System virtualization (e.g., the virtual machine abstraction) has been established as the de facto standard form of isolation in multi-tenant clouds. More recently, unikernels have emerged as a way to reuse VM isolation while also being lightweight by eliminating the general purpose OS (e.g., Linux) from the VM. Instead, unikernels directly run the application (linked with a library OS) on the virtual hardware. In this paper, we show that unikernels do not actually require a virtual hardware abstraction, but can achieve similar levels of isolation when running as processes by leveraging existing kernel system call whitelisting mechanisms. Moreover, we show that running unikernels as processes reduces hardware requirements, enables the use of standard process debugging and management tooling, and improves the already impressive performance that unikernels exhibit.
Article
Fog Computing (FC) is an emerging paradigm that extends cloud computing toward the edge of the network. In particular, FC refers to a distributed computing infrastructure confined to a limited geographical area, within which some Internet of Things applications/services run directly at the network edge on smart devices having computing, storage, and network connectivity, named fog nodes (FNs), with the goal of improving efficiency and reducing the amount of data that needs to be sent to the Cloud for massive data processing, analysis, and storage. This paper proposes an efficient strategy to offload computationally intensive tasks from end-user devices to FNs. The computation offloading problem is formulated here as a matching game with externalities, with the aim of minimizing the worst-case service time by taking into account both computation and communication costs. In particular, this paper proposes a strategy based on the deferred acceptance algorithm to achieve an efficient allocation in a distributed mode while ensuring stability of the matching outcome. The performance of the proposed method is evaluated by computer simulations in terms of worst total completion time, mean waiting time, and mean total completion time per task. Moreover, to highlight the advantages of the proposed method, performance comparisons with different alternatives are also presented and critically discussed. Finally, a fairness analysis of the proposed allocation strategy is provided on the basis of Jain's index.
Article
Distributed cloud networking enables the deployment of a wide range of services in the form of interconnected software functions instantiated over general purpose hardware at multiple cloud locations distributed throughout the network. We consider the problem of optimal service delivery over a distributed cloud network, in which nodes are equipped with both communication and computation resources. We address the design of distributed online solutions that drive flow processing and routing decisions, along with the associated allocation of cloud and network resources. For a given set of services, each described by a chain of service functions, we characterize the cloud network capacity region and design a family of dynamic cloud network control (DCNC) algorithms that stabilize any service input rate inside the capacity region, while achieving arbitrarily close to minimum resource cost. The proposed DCNC algorithms are derived by extending Lyapunov drift-plus-penalty control to a novel multi-commodity-chain (MCC) queuing system, resulting in the first throughput and cost optimal algorithms for a general class of MCC flow problems that generalizes traditional multi-commodity flow by including flow chaining, flow scaling, and joint communication/computation resource allocation. We provide throughput and cost optimality guarantees, convergence time analysis, and extensive simulations in representative cloud network scenarios.
Article
Mobile edge computing is envisioned as a promising paradigm to address the conflict between computationally intensive IoT applications and resource-constrained lightweight mobile devices. However, most existing research on mobile edge computation offloading has only taken the resource allocation between the mobile devices and the MEC servers into consideration, ignoring the huge computation resources in the centralized cloud computing center. To make full use of the centralized cloud and distributed MEC resources, designing a collaborative computation offloading mechanism becomes particularly important. Note that current MEC-hosted networks, which mostly adopt networking technology integrating cellular and core networks, face new challenges of a single networking mode, long latency, poor reliability, high congestion, and high energy consumption. Hybrid fiber-wireless networks integrating both low-latency fiber optic and flexible wireless technologies are a promising solution. Toward this end, we provide in this article a generic fiber-wireless architecture with coexistence of centralized cloud and distributed MEC for IoT connectivity. The problem of cloud-MEC collaborative computation offloading is defined, and a game-theoretic collaborative computation offloading scheme is proposed as our solution. Numerical results corroborate that our proposed scheme can achieve high energy efficiency and scales well as the number of mobile devices increases.
Article
The technological evolution of mobile user equipments (UEs), such as smartphones or laptops, goes hand in hand with the evolution of new mobile applications. However, running computationally demanding applications at the UEs is constrained by their limited battery capacity and energy consumption. A suitable solution for extending the battery lifetime of the UEs is to offload applications demanding heavy processing to a conventional centralized cloud (CC). Nevertheless, this option introduces significant execution delay, consisting of the time to deliver the offloaded applications to the cloud and back plus the computation time at the cloud. Such delay is inconvenient and makes offloading unsuitable for real-time applications. To cope with the delay problem, a new emerging concept, known as mobile edge computing (MEC), has been introduced. The MEC brings computation and storage resources to the edge of the mobile network, enabling highly demanding applications to run at the UE while meeting strict delay requirements. The MEC computing resources can also be exploited by operators and third parties for specific purposes. In this paper, we first describe major use cases and reference scenarios where the MEC is applicable. After that, we survey existing concepts integrating MEC functionalities into mobile networks and discuss the current advancement in MEC standardization. The core of this survey then focuses on the user-oriented use case in the MEC, i.e., computation offloading. In this regard, we divide the research on computation offloading into three key areas: i) the decision on computation offloading; ii) the allocation of computing resources within the MEC; and iii) mobility management. Finally, we highlight lessons learned in the area of MEC and discuss open research challenges yet to be addressed in order to fully enjoy the potential offered by the MEC.
Article
Mobile-edge computing (MEC) is an emerging paradigm to meet the ever-increasing computation demands from mobile applications. By offloading computationally intensive workloads to the MEC server, the quality of the computation experience, e.g., the execution latency, could be greatly improved. Nevertheless, as on-device battery capacities are limited, computation would be interrupted when the battery energy runs out. To provide satisfactory computation performance as well as to achieve green computing, it is of significant importance to seek renewable energy sources to power mobile devices via energy harvesting (EH) technologies. In this paper, we investigate a green MEC system with EH devices and develop an effective computation offloading strategy. The execution cost, which addresses both the execution latency and task failure, is adopted as the performance metric. A low-complexity online algorithm, namely the Lyapunov optimization-based dynamic computation offloading (LODCO) algorithm, is proposed, which jointly makes the offloading decision and chooses the CPU-cycle frequencies for mobile execution and the transmit power for computation offloading. A unique advantage of this algorithm is that its decisions depend only on instantaneous side information, without requiring distribution information of the computation task requests, the wireless channel, or the EH processes. The implementation of the algorithm only requires solving a deterministic problem in each time slot, for which the optimal solution can be obtained either in closed form or by bisection search. Moreover, the proposed algorithm is shown to be asymptotically optimal via rigorous analysis. Simulation results verify the theoretical analysis and validate the effectiveness of the proposed algorithm.
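A heavily simplified sketch of the kind of per-slot deterministic problem such an algorithm solves: enumerate a few candidate CPU frequencies and transmit powers and pick the cheapest option under a battery-weighted cost. Every formula and constant below is an illustrative stand-in, not LODCO's actual model.

```python
# Illustrative per-slot decision in the spirit of Lyapunov-based offloading:
# pick, from a small set of modes, the one minimizing an "execution cost"
# penalized by a battery-dependent weight. Formulas are toy stand-ins.

def per_slot_decision(channel_gain, battery, task_bits, weight=10.0):
    options = []
    # Mode 1: local execution at a few candidate CPU frequencies (GHz).
    for f in (0.4, 0.8, 1.2):
        latency = task_bits / (f * 1e9)           # toy: one bit per cycle
        energy = 1e-27 * (f * 1e9) ** 2 * task_bits
        options.append(("local", f, latency + weight * battery * energy))
    # Mode 2: offload at a few candidate transmit powers (W).
    for p in (0.1, 0.5, 1.0):
        rate = 1e6 * (1 + channel_gain * p)       # toy rate model (bits/s)
        latency = task_bits / rate
        energy = p * latency
        options.append(("offload", p, latency + weight * battery * energy))
    # Mode 3: drop the task at a fixed penalty.
    options.append(("drop", None, 5.0))
    return min(options, key=lambda o: o[2])

print(per_slot_decision(channel_gain=8.0, battery=0.3, task_bits=2e6))
```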
Article
The success of the Internet of Things and rich cloud services have helped create the need for edge computing, in which data processing occurs in part at the network edge, rather than completely in the cloud. Edge computing could address concerns such as latency, mobile devices' limited battery life, bandwidth costs, security, and privacy.
Article
With emerging demands for local-area and popular content sharing services, multihop device-to-device (D2D) communication is conceived as a vital component of next-generation cellular networks to improve spectral reuse, bring hop gains, and enhance system capacity. Reaping these benefits depends on fundamentally understanding its potential performance impacts and efficiently solving several main technical problems. Aiming to establish a new paradigm for the analysis and design of multihop D2D communications, in this article we propose a dynamic graph optimization framework that enables the modeling of large-scale systems with multiple D2D pairs and node mobility patterns. By inherently modeling the main technological problems of multihop D2D communications, this framework facilitates the investigation of theoretical performance limits and the study of optimal system design. Furthermore, these achievable benefits are demonstrated by example simulations of a realistic multihop D2D communication network underlaying a cellular network.
Article
Tasks in modern data-parallel clusters have highly diverse resource requirements across CPU, memory, disk, and network. Any of these resources may become a bottleneck, so the likelihood of wasting resources due to fragmentation is now larger. Today's schedulers do not explicitly reduce fragmentation. Worse, since they only allocate cores and memory, the resources they ignore (disk and network) can be over-allocated, leading to interference, failures, and hogging of cores or memory that could have been used by other tasks. We present Tetris, a cluster scheduler that packs, i.e., matches multi-resource task requirements with the resource availabilities of machines, so as to increase cluster efficiency (makespan). Further, Tetris uses an analog of shortest-running-time-first to trade off cluster efficiency for speeding up individual jobs. Tetris' packing heuristics seamlessly work alongside a large class of fairness policies. Trace-driven simulations and deployment of our prototype on a 250-node cluster show median gains of 30% in job completion time while achieving nearly perfect fairness.
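One common way to instantiate such multi-resource packing is a dot-product alignment score between a task's demand vector and a machine's free-resource vector; whether this matches Tetris's exact scoring is not stated in the abstract, so treat the sketch below (with made-up resource names and values) as illustrative.

```python
# Sketch of an alignment-based multi-resource packing heuristic: score each
# candidate task by the dot product of its demand vector with the machine's
# free-resource vector, and place the highest-scoring task that fits.

def dot(u, v):
    return sum(u[k] * v[k] for k in u)

def fits(demand, free):
    return all(demand[k] <= free[k] for k in demand)

def pack_step(free, pending):
    """free: {resource: amount} on one machine; pending: {task: demand}.
    Returns the best-aligned task that fits, or None."""
    candidates = [(dot(d, free), t) for t, d in pending.items() if fits(d, free)]
    return max(candidates)[1] if candidates else None

free = {"cpu": 8.0, "mem": 16.0, "net": 1.0}
pending = {
    "a": {"cpu": 4.0, "mem": 2.0, "net": 0.1},
    "b": {"cpu": 1.0, "mem": 12.0, "net": 0.2},
    "c": {"cpu": 6.0, "mem": 20.0, "net": 0.1},   # does not fit (mem)
}
print(pack_step(free, pending))  # picks the task best aligned with free capacity
```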
Article
We present a review of the problem of scheduled channel access in wireless networks with emphasis on ad hoc and sensor networks as opposed to WiFi, cellular, and infrastructure-based networks. After a brief introduction and problem definition, we examine in detail specific instances of the scheduling problem. These instances differ from each other in a number of ways, including the detailed network model and the objective function or performance criteria. They all share the “layerless” viewpoint that connects the access problem with the physical layer and, occasionally, with the routing layer. This review is intended to provide a reference point for the rich set of problems that arise in the allocation of resources in modern and future networks.
Article
In 1971, C. M. Fortuin, P. W. Kasteleyn and J. Ginibre [FKG] published a remarkable inequality relating certain real functions defined on a finite distributive lattice. This inequality, now generally known as the FKG inequality, arose in connection with these authors’ investigations into correlation properties of Ising ferromagnet spin systems and generalized earlier results of Griffiths [Gri] and Harris [Har] (who was studying percolation models). The FKG inequality in turn has stimulated further research in a number of directions, including a variety of interesting generalizations and applications, particularly to statistics, computer science and the theory of partially ordered sets. It turns out that special cases of the FKG inequality can be found in the literature of at least a half dozen different fields, and in some sense can be traced all the way back to work of Chebyshev.
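For reference, the inequality in its standard form (a standard statement, not necessarily this chapter's exact notation): for a finite distributive lattice $L$,

```latex
% FKG inequality (standard statement)
\[
  \Big(\sum_{x \in L} f(x)\,g(x)\,\mu(x)\Big)\Big(\sum_{x \in L} \mu(x)\Big)
  \;\ge\;
  \Big(\sum_{x \in L} f(x)\,\mu(x)\Big)\Big(\sum_{x \in L} g(x)\,\mu(x)\Big)
\]
for all nondecreasing $f, g : L \to \mathbb{R}$, whenever $\mu \ge 0$ is
log-supermodular, i.e.\ $\mu(x)\,\mu(y) \le \mu(x \wedge y)\,\mu(x \vee y)$
for all $x, y \in L$.
```

In probabilistic language: increasing events are positively correlated under such measures, which is exactly the property needed in the percolation and Ising settings mentioned above.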
Article
We consider the problem of approximating an integer program by first solving its relaxation linear program and then “rounding” the resulting solution. For several packing problems, we prove probabilistically that there exists an integer solution close to the optimum of the relaxation solution. We then develop a methodology for converting such a probabilistic existence proof to a deterministic approximation algorithm. The algorithm mimics the existence proof in a very strong sense.
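For concreteness, here is a minimal relax-and-round sketch on a toy packing instance, assuming `scipy` is available; the uniform scaling-plus-retry step is the textbook recipe, not the specific derandomization method this abstract describes.

```python
# LP-relax-and-round sketch for a toy packing problem
#   max c.x  s.t.  A x <= b,  x in {0,1}^n.
# Solve the relaxation (x in [0,1]^n), then set x_j = 1 with probability
# proportional to the fractional value. Toy data, textbook recipe.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
c = np.array([3.0, 2.0, 4.0, 1.0])
A = np.array([[2.0, 1.0, 3.0, 1.0],
              [1.0, 2.0, 1.0, 2.0]])
b = np.array([4.0, 3.0])

# linprog minimizes, so negate c; bounds keep x in [0, 1].
res = linprog(-c, A_ub=A, b_ub=b, bounds=[(0, 1)] * len(c))
x_frac = res.x

scale = 0.5        # shrink to make constraint violation unlikely
x_int = (rng.random(len(c)) < scale * x_frac).astype(int)
while (A @ x_int > b).any():            # retry until feasible
    x_int = (rng.random(len(c)) < scale * x_frac).astype(int)

print("fractional:", np.round(x_frac, 3), "rounded:", x_int)
```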
Article
Linear-algebra rank is the solution to an especially tractable optimization problem. This tractability is viewed abstractly, and extended to certain more general optimization problems which are linear programs relative to certain derived polyhedra.
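Concretely, the tractable optimization alluded to is the matroid rank function, which generalizes linear-algebra rank and is computed by a greedy scan (standard definitions, not this article's notation):

```latex
% Matroid rank function (standard definition)
\[
  r(S) \;=\; \max\{\, |I| \;:\; I \subseteq S,\; I \in \mathcal{I} \,\},
  \qquad S \subseteq E,
\]
and, more generally, the weighted problem
$\max\{\, w(I) : I \in \mathcal{I} \,\}$ is solved exactly by the greedy
algorithm precisely when $(E, \mathcal{I})$ is a matroid.
```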
Chapter
This paper deals with results on performance measures of greedy-type algorithms for maximization or minimization problems on general independence systems, which were given by the authors independently in earlier papers ([3] and [6]). Besides a unified formulation of the earlier results, some modifications and extensions are presented here, underlining the central role that the greedy algorithm plays in combinatorial optimization.
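In its standard form, the greedy guarantee this line of work established is the rank-quotient bound (generic notation, hedged rendering):

```latex
% Rank-quotient bound for greedy on an independence system (standard form)
\[
  \frac{w(\mathrm{GREEDY})}{w(\mathrm{OPT})} \;\ge\; q(E, \mathcal{F})
  \;=\; \min_{F \subseteq E} \frac{r(F)}{\rho(F)},
\]
where $r(F)$ and $\rho(F)$ are the minimum and maximum cardinalities of a
maximal independent subset of $F$; the bound is tight, and $q = 1$ exactly
when $(E, \mathcal{F})$ is a matroid, recovering greedy optimality.
```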
Conference Paper
We develop a framework of distributed and stateless solutions for packing and covering linear programs, which are solved by multiple agents operating in a cooperative but uncoordinated manner. Our model has a separate "agent" controlling each variable, and an agent is allowed to read off the current values only of those constraints in which it has non-zero coefficients. This is a natural model for many distributed applications like flow control, maximum bipartite matching, and dominating sets. The most appealing feature of our algorithms is their simplicity and polylogarithmic convergence. For the packing LP max{c^T x | Ax ≤ b, x ≥ 0}, the algorithm maintains a dual variable y_i = exp[(1/ε)(A_i x / b_i − 1)] for each constraint i, and each agent j iteratively increases (resp. decreases) x_j multiplicatively if A_j^T y is too small (resp. large) as compared to c_j. Our algorithm, starting from a feasible solution, always maintains feasibility, and computes a (1+ε) approximation in poly(ln(mn·A_max)/ε) rounds. Here m and n are the numbers of rows and columns of A, and A_max, also known as the "width" of the LP, is the ratio of the maximum and minimum non-zero entries A_ij/(b_i c_j). A similar algorithm works for the covering LP min{b^T y | A^T y ≥ c, y ≥ 0} as well. While exponential dual variables have been used in several packing/covering LP algorithms before [25, 9, 13, 12, 26, 16], this is the first algorithm which is both stateless and has polylogarithmic convergence. Our algorithms can be thought of as applying distributed gradient descent/ascent on a carefully chosen potential. Our analysis differs from those of previous multiplicative-update based algorithms and argues that, while the current solution is far away from optimality, the potential function decreases/increases by a significant factor.
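The update rule just described can be sketched in a few lines; the step size, iteration count, and toy instance below are arbitrary choices of mine, and the real algorithm's careful step-size control and stopping rule are omitted.

```python
# Sketch of the stateless multiplicative-update dynamics for a packing LP
#   max{ c.x : A x <= b, x >= 0 }:
# duals y_i = exp((1/eps)(A_i x / b_i - 1)); each agent j nudges x_j up if
# (A^T y)_j < c_j and down otherwise. Step size/stopping are simplified.
import numpy as np

def stateless_packing(A, b, c, eps=0.1, step=0.01, iters=4000):
    m, n = A.shape
    x = np.full(n, 1e-3)                      # any small positive start
    for _ in range(iters):
        y = np.exp((A @ x / b - 1.0) / eps)   # exponential dual variables
        grad = A.T @ y                        # agent j reads only (A^T y)_j
        up = grad < c                         # "too small" vs c_j: raise x_j
        x[up] *= (1.0 + step)
        x[~up] *= (1.0 - step)
    return x

A = np.array([[1.0, 2.0], [3.0, 1.0]])
b = np.array([4.0, 6.0])
c = np.array([1.0, 1.0])
x = stateless_packing(A, b, c)
print("x =", np.round(x, 3), " Ax =", np.round(A @ x, 3),
      " c.x =", round(float(c @ x), 3))
```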
Conference Paper
We consider a wireless network consisting of multiple transmitters with multicast traffic destined for a set of receivers. We are interested in the problem of joint scheduling and rate control under two performance objectives: maximizing the total sum throughput of the network, and being proportionally fair with respect to the received rate at each receiver. We first consider static wireless networks, and then extend our analysis to the more general and more realistic case of time-varying networks. We finally verify our analytical results through a set of simulations.
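The two objectives, in their standard utility-maximization forms (my hedged rendering, using generic rate variables rather than the paper's notation):

```latex
% Sum throughput vs. proportional fairness (standard forms)
\[
  \text{(sum throughput)}\quad \max \sum_{k} r_k,
  \qquad\qquad
  \text{(proportional fairness)}\quad \max \sum_{k} \log r_k,
\]
where $r_k$ is the long-term rate received by receiver $k$ and the feasible
rate region is induced by the scheduling and rate-control constraints.
```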
Article
Today's data centers offer IT services mostly hosted on dedicated physical servers. Server virtualization provides a technical means for server consolidation, so that multiple virtual servers can be hosted on a single server. Server consolidation describes the process of combining the workloads of several different servers onto a set of target servers. We focus on server consolidation with dozens or hundreds of servers, as regularly found in enterprise data centers. Cost saving is among the key drivers of such projects. This paper presents decision models to optimally allocate source servers to physical target servers while considering real-world constraints. Our central model is proven to be an NP-hard problem; therefore, besides an exact solution method, a heuristic is presented to address large-scale server consolidation projects. In addition, a preprocessing method for server load data is introduced, allowing for the consideration of quality-of-service levels. Extensive experiments were conducted on a large set of server load data from a data center provider, focusing on the managerial question of which problem sizes can be solved. Results show that, on average, server savings of 31 percent can be achieved simply by taking cycles in the server workload into account.
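As a hedged illustration of the flavor of heuristic used for large-scale consolidation, here is a first-fit-decreasing sketch, which is a generic bin-packing heuristic and not necessarily the paper's own method; capacities and loads are made up.

```python
# First-fit-decreasing sketch for server consolidation viewed as bin
# packing: sort source servers by load, place each on the first target
# with room, opening a new target when none fits. Illustrative values.

def first_fit_decreasing(loads, capacity):
    """loads: {server: cpu_load}; capacity: identical target capacity.
    Returns a list of targets, each a dict of hosted servers."""
    targets = []
    for name, load in sorted(loads.items(), key=lambda kv: -kv[1]):
        for t in targets:
            if sum(t.values()) + load <= capacity:
                t[name] = load
                break
        else:                   # no existing target fits: open a new one
            targets.append({name: load})
    return targets

loads = {"s1": 0.6, "s2": 0.5, "s3": 0.4, "s4": 0.3, "s5": 0.2}
for i, t in enumerate(first_fit_decreasing(loads, capacity=1.0)):
    print(f"target {i}: {t}")
```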
Conference Paper
The max-flow min-cut theorem of Ford and Fulkerson is based on an even more foundational result, namely Menger's theorem on graph connectivity. Menger's theorem provides a good characterization for the following single-source disjoint paths problem: given a graph G, with a source vertex s and terminals t_1, ..., t_k, decide whether there exist edge-disjoint s-t_i paths for i = 1, ..., k. We consider a natural, NP-hard generalization of this problem, which we call the single-source unsplittable flow problem. We are given a source and terminals as before; but now each terminal t_i has a demand p_i ≤ 1, and each edge e of G has a capacity c_e ≥ 1. The problem is to decide whether one can choose a single s-t_i path for each i, so that the resulting set of paths respects the capacity constraints: the total amount of demand routed across any edge e must be bounded by the capacity c_e. The main results of this paper are constant-factor approximation algorithms for three natural optimization versions of this problem, in arbitrary directed and undirected graphs. The development of these algorithms requires a number of new techniques for rounding fractional solutions to network flow problems; for two of the three problems we consider, there were no previous techniques capable of providing an approximation in the general case, and for the third, the randomized rounding algorithm of Raghavan and Thompson provides a logarithmic approximation. Our techniques are also of interest from the perspective of a family of NP-hard load balancing and machine scheduling problems that can be reduced to the single-source unsplittable flow problem.
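In symbols, the feasibility version described above reads (my rendering of the stated constraints):

```latex
% Single-source unsplittable flow, feasibility version
\[
  \text{choose one } s\text{--}t_i \text{ path } P_i \text{ for each } i
  \quad \text{s.t.} \quad
  \sum_{i \,:\, e \in P_i} p_i \;\le\; c_e \quad \forall e \in E,
\]
with demands $p_i \le 1$ and capacities $c_e \ge 1$; the fractional
relaxation splits each demand over several paths, and the rounding
techniques above convert such fractional flows into single-path routings.
```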
FlashLinQ: A synchronous distributed scheduler for peer-to-peer ad hoc networks
  • X Wu
Solving packing integer programs via randomized rounding with alterations
  • N Bansal
  • N Korula
  • V Nagarajan
  • A Srinivasan
Fast upcasting on matroid problems
  • D Peleg