Figure 7: Execution Times with Growing Applications.

Source publication
Conference Paper
Full-text available
As fog computing brings compute and storage resources to the edge of the network, there is an increasing need for automated placement (i.e., selection of hosting devices) to deploy distributed applications. Such a placement must conform to applications' resource requirements in a heterogeneous fog infrastructure. The placement decision-making is fu...

Similar publications

Conference Paper
Full-text available
Low power wide area networks (LPWAN) are widely used in IoT applications as they offer low power consumption and long-range communication. LoRaWAN and SigFox have taken the top positions in the unlicensed ISM bands, while LTE-M and NB-IoT have emerged within cellular networks. We focus on unlicensed bands operation because of their availability for...
Article
Full-text available
This paper aims to characterize the profile of the prospective IoT market in Indonesia. The primary data were collected in July 2018 through a comprehensive survey that sampled respondents representing the whole Indonesian population. The questionnaire was developed by extracting the 4 (four) main issues regarding which the potential users of IoT t...
Conference Paper
Full-text available
Fog application design is complex as it comprises not only the application architecture, but also the runtime infrastructure, and the deployment mapping from application modules to infrastructure machines. For each of these aspects, there is a variety of design options that all affect quality of service and cost of the resulting application. In thi...
Article
With the increasing number of IoT devices, fog computing has emerged, providing processing resources at the edge for the tremendous amount of sensed data and IoT computation. The advantage of the fog gets eliminated if it is not present near IoT devices. Fogs nowadays are pre-configured in specific locations with pre-defined services, which limit t...
Conference Paper
Full-text available
It is difficult to overstate how large a role Intelligent Transportation Systems (ITS) technology has played in advancing safety, mobility, and productivity in our daily lives. ITS encompasses a broad range of technologies, including information and communication technologies, transportation and communication infrastructures, connected vehicles, an...

Citations

... Benamer et al. [29] propose a latency-aware task placement heuristic mechanism that also tries to minimize the inter-node traffic by considering the placement of precedent tasks. Xia et al. [30] propose scalable placement algorithms and heuristic mechanisms to minimize response times of IoT applications. Another common method of optimizing task placement is to group tasks that communicate and assign them to the same edge node or to nearby nodes [31,32], thus considering only the communication latency and not the execution latency, as we do in our problem formulation. ...
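To make the grouping idea concrete, the following is a minimal, hypothetical sketch, not the actual algorithm of [31,32]; all names (`edges`, `demand`, `capacity`) are illustrative assumptions. It greedily co-locates communicating tasks while respecting node capacity:

```python
# Hypothetical greedy grouping: merge the task groups connected by the
# heaviest-traffic edges first, as long as the merged demand fits a node.

def group_by_traffic(edges, capacity, demand):
    """edges: {(task_a, task_b): traffic}; demand: task -> resource need.
    Returns a mapping task -> group id (tasks in one group share a node)."""
    group = {}       # task -> group id
    load = {}        # group id -> accumulated resource demand
    next_gid = 0
    # Consider the heaviest-traffic edges first.
    for (a, b), _ in sorted(edges.items(), key=lambda e: -e[1]):
        for t in (a, b):
            if t not in group:
                group[t] = next_gid
                load[next_gid] = demand[t]
                next_gid += 1
        ga, gb = group[a], group[b]
        # Merge the two groups if the combined demand still fits one node.
        if ga != gb and load[ga] + load[gb] <= capacity:
            for t, g in group.items():
                if g == gb:
                    group[t] = ga
            load[ga] += load.pop(gb)
    return group
```

Merging heavy edges first removes the largest communication costs before node capacity is exhausted.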
Article
Full-text available
Edge analytics receives ever-increasing interest since processing streaming data closer to where they are produced, rather than transferring them to the cloud, ensures lower latency while also addressing data privacy issues. In this work, we deal with the placement of analytic tasks to heterogeneous geo-distributed edge devices while targeting three objectives, namely latency, quality of results, and resource utilization. In addition, we investigate this multi-objective problem in a multi-query setting, where we jointly optimize multiple analytic jobs while dynamically adjusting task placement decisions. We explore multiple solutions that we thoroughly evaluate; interestingly, in a multi-query setting, all three objectives can be improved simultaneously by our proposals in many cases. Furthermore, we develop a proof-of-concept prototype using Apache Storm. Our solutions are thoroughly evaluated and shown to yield improvements of more than 50% compared to advanced baselines targeting only latency. Moreover, our software prototype achieved speedups of up to 6× over the Resource Aware Apache Storm scheduler, with an average speedup of 2.76×, when deployed over a small-scale infrastructure.
... Each service requires certain resources, and each computing unit offers certain available resources. Following References 27, 28, and 30, resources are modeled as blocks; each service therefore consumes part of the resource blocks of its computing unit. ...
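As an illustration of this block-based model (a sketch under assumed names, not code from References 27, 28, or 30), a computing unit can be represented as a fixed pool of resource blocks that placed services draw from:

```python
class ComputingUnit:
    """A computing unit exposing a fixed number of resource blocks
    (illustrative sketch of the block-based resource model)."""

    def __init__(self, total_blocks):
        self.total_blocks = total_blocks
        self.used_blocks = 0

    def can_host(self, service_blocks):
        # A service fits if enough blocks remain free.
        return self.used_blocks + service_blocks <= self.total_blocks

    def place(self, service_blocks):
        if not self.can_host(service_blocks):
            raise ValueError("not enough free resource blocks")
        self.used_blocks += service_blocks

unit = ComputingUnit(total_blocks=8)
unit.place(3)             # a service consumes 3 of the 8 blocks
print(unit.can_host(6))   # False: only 5 blocks remain
```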
Article
Full-text available
Nowadays, fog computing has joined cloud computing as an emerging computing paradigm to provide resources at the network edge. Fog servers process data associated with Internet of Things (IoT) devices independently of cloud computing, thus saving bandwidth, resource reservations, and storage for real-time applications with lower latency. Besides, cloud computing supports the integration of edge and cloud resources and facilitates the placement of IoT applications at the network edge. Recent research focuses on how to deploy IoT services as components of IoT applications on fog computing units so that the loss of resources, energy, and bandwidth is minimized. This problem, known as the IoT service placement problem (SPP), is NP-hard, and meta-heuristic models are a popular way to address it. Each IoT service has its own requirements in terms of latency sensitivity, processor, memory, and storage. Meanwhile, fog computing units are heterogeneous and have limited resource capacities. Therefore, SPP should be addressed by considering the features of the fog environment, tolerable delay, and network bandwidth. We formulate SPP as a multi-objective optimization problem from the perspective of throughput, service cost, resource utilization, energy consumption, and service latency. To solve this problem, the learner performance-based behavior (LPB) algorithm is presented as a meta-heuristic model that originates from the MAPE-K autonomous planning model. The proposed approach, LPB-SPP, considers resource consumption distribution and service deployment prioritization, and also uses the concepts of elitism and balanced resource consumption to improve the placement process. LPB-SPP has been validated using different performance metrics, and the results have been compared against state-of-the-art algorithms. Simulations show that LPB-SPP performs better in most comparisons.
... This data can overwhelm the network in the centralized cloud paradigm, potentially causing latency- and bandwidth-related issues [3]. To cope with such issues, fog computing proposes the utilization of both cloud and edge compute nodes [4], [5]. This may hinder the accumulation of data in one central location, reduce the communication latency, and improve bandwidth utilization because the computations of the IoT data can also be performed close to the data sources [6]. ...
... The IoT devices are located at the bottom of the hierarchy, and physically close to the edge compute nodes [32]. These devices integrate sensors and/or actuators in order to sense and/or interact with the surrounding environment [5]. The IoT devices are usually resource-constrained, and may not integrate enough computational resources to implement the necessary communication protocols for interacting with the compute nodes directly (e.g., using an application layer protocol such as HTTP) [33]. ...
Article
Full-text available
Fog computing enables the execution of IoT applications on compute nodes which reside both in the cloud and at the edge of the network. To achieve this, most fog computing systems route the IoT data on a path which starts at the data source and goes through various edge and cloud nodes. Each node on this path may accept the data if there are available resources to process this data locally. Otherwise, the data is forwarded to the next node on the path. Notably, when the data is forwarded (rather than accepted), the communication latency increases by the delay to reach the next node. To avoid this, we propose a routing mechanism which maintains a history of all nodes that have accepted data of each context in the past. By processing this history, our mechanism sends the data directly to the closest node that tends to accept data of the same context. This reduces forwarding by the nodes on the path and can reduce the communication latency. We evaluate this approach using both prototype- and simulation-based experiments, which show reduced communication latency (by up to 23%) and a lower number of hops traveled (by up to 73%) compared to a state-of-the-art method.
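A minimal sketch of this history-based routing idea, assuming hypothetical data structures (`delay`, `history`) rather than the paper's actual implementation:

```python
from collections import defaultdict

class HistoryRouter:
    """Sends data of a context directly to the closest node that has
    accepted data of the same context before (illustrative sketch)."""

    def __init__(self, delay):
        self.delay = delay               # node -> measured latency to node
        self.history = defaultdict(set)  # context -> nodes that accepted it

    def record_acceptance(self, context, node):
        self.history[context].add(node)

    def next_hop(self, context, default_path):
        candidates = self.history[context]
        if candidates:
            # Skip intermediate forwarding: go straight to the closest
            # node known to accept this context.
            return min(candidates, key=lambda n: self.delay[n])
        return default_path[0]           # fall back to the normal path
```

Each recorded acceptance enriches the history, so later transmissions of the same context avoid hop-by-hop forwarding.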
... Cloud-fog computing architectures for Internet of things (IoT) applications also aim for the allocation of distributed processes or services to computing hardware while trying to optimize some criteria [11], [12], [13], [14], [15]. However, no evaluation of resilience performance is performed after deployment. ...
Preprint
Full-text available
Executing distributed cyber-physical software processes on edge devices in a way that maintains the resiliency of the overall system while adhering to resource constraints is quite a challenging trade-off for developers. Current approaches do not solve this problem of deploying software components to devices in a way that satisfies different resilience requirements that can be encoded by developers at design time. This paper introduces a resilient deployment framework that achieves this by accepting user-defined constraints to optimize redundancy or cost for a given application deployment. Experiments with a microgrid energy management application developed using a decentralized software platform show that the deployment configuration can play an important role in enhancing the resilience capabilities of distributed applications as well as reducing the resource demands on individual nodes, even without modifying the control logic.
... Of the 52 papers that considered end-to-end latency, 4 also considered network latency and computation time separately, so in the end 23 papers considered non-end-to-end latency. Some papers, such as [105], combined different (partial) latencies into one metric using their weighted sum. ...
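A combined metric of this kind is typically a convex combination of the partial latencies; the symbols and weights below are illustrative, not taken from [105]:

$$L_{\text{total}} = w_{\text{net}}\,L_{\text{net}} + w_{\text{comp}}\,L_{\text{comp}}, \qquad w_{\text{net}} + w_{\text{comp}} = 1, \quad w_i \ge 0.$$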
... Xia et al. [105] carried out a simulation in SimGrid. They developed a scaling approach for the test infrastructure and the test application, which consists of three phases and generates problem instances with up to approximately 10,000 fog nodes and 4000 components. ...
Article
Full-text available
Recently, the concept of cloud computing has been extended towards the network edge. Devices near the network edge, called fog nodes, offer computing capabilities with low latency to nearby end devices. In the resulting fog computing paradigm (also called edge computing), application components can be deployed to a distributed infrastructure, comprising both cloud data centers and fog nodes. The decision of which infrastructure nodes should host which application components has a large impact on important system parameters like performance and energy consumption. Several algorithms have been proposed to find a good placement of applications on a fog infrastructure. In most cases, the proposed algorithms were evaluated experimentally by the respective authors. In the absence of a theoretical analysis, a thorough and systematic empirical evaluation is of key importance for being able to make sound conclusions about the suitability of the algorithms. The aim of this paper is to survey how application placement algorithms for fog computing are evaluated in the literature. In particular, we identify good and bad practices that should be utilized and avoided, respectively, when evaluating such algorithms.
... Task graph The simplest model for application partitioning is the task graph, usually represented as a directed acyclic graph (DAG), which is a widely used construction in cloud and fog computing for describing dependencies between components in complex distributed applications. In [47][48][49][50], task-graph-based models were applied to model applications; they denote the dependencies between different sub-tasks and support automated partitioning strategies for generating an optimal offloading. Furthermore, a DAG can carry other relevant information in its vertices, such as the number of necessary CPU cycles and the amount of required memory, as well as in its edges, such as the amount of I/O data as edge weights [51]. ...
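Such an annotated DAG can be expressed directly with a graph library; in the sketch below, field names like `cpu_cycles` and `io_data_kb` are assumptions for illustration, not taken from [51]:

```python
import networkx as nx

# Vertices carry CPU-cycle and memory requirements; edges carry the
# amount of I/O data exchanged between dependent tasks.
app = nx.DiGraph()
app.add_node("sense",   cpu_cycles=2e6, memory_mb=16)
app.add_node("filter",  cpu_cycles=8e6, memory_mb=64)
app.add_node("analyze", cpu_cycles=5e7, memory_mb=256)
app.add_edge("sense",  "filter",  io_data_kb=120)
app.add_edge("filter", "analyze", io_data_kb=40)

assert nx.is_directed_acyclic_graph(app)
# A topological order is a valid execution (and partitioning) order.
print(list(nx.topological_sort(app)))   # ['sense', 'filter', 'analyze']
```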
... A number of challenges regarding the problem of service placement in fog computing arise [49]: ...
... The problem of resource allocation and service placement in a fog system entails optimization of one or more specified metrics, the values of which need to be either minimized or maximized depending on a metric's contribution to the system performance [49]. This subsection discusses the most widely considered optimization metrics in fog systems. ...
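One common way to handle metrics with opposite directions is to fold them into a single minimization objective, negating the metrics that should be maximized; the sketch below uses hypothetical metric names and weights:

```python
def placement_cost(metrics, weights, directions):
    """Weighted single objective over metrics to minimize or maximize."""
    cost = 0.0
    for name, value in metrics.items():
        # Metrics to maximize enter the objective with a negative sign.
        signed = value if directions[name] == "min" else -value
        cost += weights[name] * signed
    return cost

cost = placement_cost(
    metrics={"latency_ms": 35.0, "throughput_mbps": 120.0},
    weights={"latency_ms": 0.7, "throughput_mbps": 0.3},
    directions={"latency_ms": "min", "throughput_mbps": "max"},
)
```

In practice, the metrics would also be normalized to comparable scales before weighting.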
Article
Full-text available
In recent years, fog computing has emerged as a computing paradigm to support computationally intensive and latency-critical applications for resource-limited Internet of Things (IoT) devices. The main feature of fog computing is to push computation, networking, and storage facilities closer to the network edge. This enables IoT user equipment (UE) to profit from the fog computing paradigm, mainly by offloading intensive computation tasks to fog resources. Thus, computation offloading and service placement mechanisms can overcome the resource constraints of IoT devices and improve system performance by increasing the battery lifetime of UE and reducing the total delay. In this paper, we survey the current research conducted on computation offloading and service placement in fog computing-based IoT in a comparative manner.
... The IoT devices are located at the bottom of the hierarchy, and physically close to the edge compute nodes [104]. These devices integrate sensors and/or actuators in order to sense and/or interact with the surrounding environment [237]. The IoT devices are usually resource-constrained, and may not integrate enough computational resources to implement the necessary communication protocols for interacting with the compute nodes directly (e.g., using an application layer protocol such as HTTP) [106]. ...
Thesis
Full-text available
Fog computing is a novel computing paradigm which enables the execution of applications on compute nodes which reside both in the cloud and at the edge of the network. Various performance benefits, such as low communication latency and high network bandwidth, have turned this paradigm into a well-accepted extension of cloud computing. So far, many fog computing systems have been proposed, consisting of distributed compute nodes which are often organized hierarchically in layers. Such systems commonly rely on the assumption that the nodes of adjacent layers reside close to each other, thereby achieving low latency computations. However, this assumption may not hold in fog computing systems that span over large geographical areas, due to the wide distribution of the nodes. In addition, most proposed fog computing systems route the data on a path which starts at the data source, and goes through various edge and cloud nodes. Each node on this path may accept the data if there are available resources to process this data locally. Otherwise, the data is forwarded to the next node on path. Notably, when the data is forwarded (rather than accepted), the communication latency increases by the delay to reach the next node. This thesis aims at tackling these problems by proposing distributed algorithms whereby the compute nodes measure the network proximity to each other, and self-organize accordingly. These algorithms are implemented on geographically distributed compute nodes, considering image processing and smart city use cases, and are thoroughly evaluated showing significant latency- and bandwidth-related performance benefits. Furthermore, we analyze the communication latency of sending data to distributed edge and cloud compute nodes, and we propose two novel routing approaches: i) A context-aware routing mechanism which maintains a history of previous transmissions, and uses this history to find nearby nodes with available resources. ii) edgeRouting, which leverages the high bandwidth between nodes of cloud providers in order to select network paths with low communication latency. Both of these mechanisms are evaluated under real-world settings, and are shown to be able to lower the communication latency of fog computing systems significantly, compared to alternative methods.
... The NP-hardness of problems similar to MCECD was often claimed in the literature [16], [17], [18], but seldom proven. Even if similar problems are NP-hard, this does not imply NP-hardness of MCECD. ...
... The literature contains different interpretations of latency. Some authors define latency as the total delay on a path or cycle of the application graph [24]; others define latency constraints for individual connectors [18], [25], [26]. Our approach, as described so far, belongs to this second category. ...
Article
An edge data center can host applications that require low-latency access to nearby end devices. If the resource requirements of the applications exceed the capacity of the edge data center, some non-latency-critical application components may be offloaded to the cloud. Such offloading may incur financial costs both for the use of cloud resources and for data transfer between the edge data center and the cloud. Moreover, such offloading may violate data protection requirements if components process sensitive data. The operator of the edge data center has to decide which components to keep in the edge data center and which ones to offload to the cloud. In this paper, we formalize this problem and prove that it is strongly NP-hard. We introduce an optimization algorithm that is fast enough to be run online for dynamic and automatic offloading decisions, guarantees that the solution satisfies hard constraints regarding latency, data protection, and capacity, and achieves near-optimal costs. We also show how the algorithm can be extended to handle multiple edge data centers. Experiments show that the cost of the solution found by our algorithm is on average only 2.7% higher than the optimum.
... Subsequently, the requested IoT services are hosted on the allocated edge nodes, guaranteeing the desired response time. Another work with the same objective as the ones above [28], [29] is proposed by Xia et al. [30]. Based on a backtrack search algorithm and accompanying heuristics, the proposed mechanism makes placement decisions that fit the objective. ...
... 30: Sort Δ in the order of O_{c,δ} from low to high; ...
Article
Full-text available
The Internet of Things (IoT) requires a new processing paradigm that inherits the scalability of the cloud while minimizing network latency using resources closer to the network edge. On the one hand, building up such flexibility within the edge-to-cloud continuum, consisting of a distributed networked ecosystem of heterogeneous computing resources, is challenging. On the other hand, IoT traffic dynamics and the rising demand for low-latency services foster the need to minimize response time and balance service placement. Load-balancing for fog computing becomes a cornerstone for cost-effective system management and operations. This paper studies two optimization objectives and formulates a decentralized load-balancing problem for IoT service placement: (global) IoT workload balance and (local) quality of service (QoS), in terms of minimizing the cost of deadline violation, service deployment, and unhosted services. The proposed solution, EPOS Fog, introduces a decentralized multi-agent system for collective learning that utilizes edge-to-cloud nodes to jointly balance the input workload across the network and minimize the costs involved in service execution. The agents locally generate possible assignments of requests to resources and then cooperatively select an assignment such that their combination maximizes edge utilization while minimizing service execution cost. Extensive experimental evaluation with realistic Google cluster workloads on various networks demonstrates the superior performance of EPOS Fog in terms of workload balance and QoS, compared to approaches such as First Fit and exclusively cloud-based placement. The results confirm that EPOS Fog reduces service execution delay by up to 25% and improves the load balance of network nodes by up to 90%. The findings also demonstrate how distributed computational resources at the edge can be utilized more cost-effectively by harvesting collective intelligence.
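The plan-selection step can be sketched as follows. This is a strongly simplified, hypothetical illustration of the idea only (EPOS Fog itself uses hierarchical collective learning over a tree of agents); `lambda_`, the candidate plans, and the variance-based balance measure are all assumptions:

```python
import statistics

def select_plan(candidate_plans, local_costs, aggregate, lambda_=0.5):
    """Pick the candidate workload plan that keeps the aggregate
    per-node load best balanced while keeping local cost low."""
    def score(i):
        combined = [a + p for a, p in zip(aggregate, candidate_plans[i])]
        # Lower variance = better workload balance across nodes.
        return (1 - lambda_) * statistics.pvariance(combined) \
               + lambda_ * local_costs[i]
    return min(range(len(candidate_plans)), key=score)

best = select_plan(
    candidate_plans=[[3, 0, 1], [1, 1, 2], [0, 2, 2]],  # per-node loads
    local_costs=[0.2, 0.5, 0.4],
    aggregate=[5, 9, 4],      # load already placed on the three nodes
)
```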
... Another FSPP is formulated and examined in [17] to place software components of distributed IoT applications onto a set of fog nodes to minimize the average response time of applications. In the model, the low delay of fog-to-fog communication is exploited for collaborative task offloading in order to reduce the total response time of application provisioning. ...
Article
This paper introduces FRATO (Fog Resource aware Adaptive Task Offloading), a framework for IoT-fog-cloud systems that offers minimal service provisioning delay through an adaptive task offloading mechanism. Fundamentally, FRATO relies on the available fog resources to flexibly select the optimal offloading policy, which in particular includes a collaborative task offloading solution based on the data fragment concept. In addition, two distributed fog resource allocation algorithms, namely TPRA and MaxRU, are developed to deploy the optimized offloading solutions efficiently in cases of resource competition. Through extensive simulation analysis, the FRATO-based service provisioning approaches show potential advantages in significantly reducing the average delay in systems with a high rate of service requests and a heterogeneous fog environment, compared with existing solutions.