Figure 8 - uploaded by Sumit Maheshwari
Average Response Time Comparison for Core Cloud and Edge Only System, with Different Load and Inter-edge Bandwidth for Baseline

Source publication
Conference Paper
Full-text available
This paper presents an analysis of the scalability and performance of an edge cloud system designed to support latency-sensitive applications. A system model for geographically dispersed edge clouds is developed by considering an urban area such as Chicago and co-locating edge computing clusters with known Wi-Fi access point locations. The model al...

Contexts in source publication

Context 1
... a bandwidth-constrained cloud system cannot compete with an edge-only system in terms of response time, so further discussions in this paper will assume a bandwidth-unconstrained cloud. Figure 8(a) plots the average response time for the core cloud as well as the edge-only system with different inter-edge bandwidths. At the extreme fronthaul bandwidth of 100 Gbps, the edge-only system is comparable to the unconstrained-bandwidth edge-only system, and therefore all the edge resources are utilized. ...
Context 2
... as there are still significant queuing delays for a loaded edge at an AP (or a neighboring AP). Beyond a certain load point, there is no dip in response time irrespective of how good the fronthaul connectivity between edges is. In this case, there is a crossover around Load=7, so we compare the CDFs of the core-cloud-only and edge-only systems for the 1 Gbps case in Fig. 8(b). A linear rise in response time can be observed for the static load case, implying that an inter-edge bandwidth of 1 Gbps is insufficient to run such a heavily loaded ...
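The crossover behavior described in this context can be illustrated with a simple M/M/1 queuing sketch. This is not the paper's actual model; all service rates, request sizes, and delays below are illustrative assumptions chosen so that the edge wins at low load and the cloud wins at high load.

```python
# Illustrative M/M/1 sketch of the edge/cloud response-time crossover.
# All rates and delays are hypothetical, not the paper's parameters.

def mm1_response_time(arrival_rate, service_rate):
    """Mean response time W = 1 / (mu - lambda) for a stable M/M/1 queue."""
    if arrival_rate >= service_rate:
        return float("inf")  # queue is unstable (overloaded)
    return 1.0 / (service_rate - arrival_rate)

def edge_only(load, inter_edge_bw_gbps, request_gb=0.01, mu_edge=10.0):
    # Transfer to a neighboring edge plus queuing delay at that edge.
    transfer = request_gb * 8 / inter_edge_bw_gbps  # seconds
    return transfer + mm1_response_time(load, mu_edge)

def core_cloud(load, mu_cloud=50.0, backhaul_delay=0.3):
    # Fixed backhaul latency plus queuing at a better-provisioned cloud.
    return backhaul_delay + mm1_response_time(load, mu_cloud)

# With these illustrative parameters, the edge-only system is faster at
# low load, but the cloud overtakes it as edge queues build up.
for load in range(1, 10):
    e, c = edge_only(load, 1.0), core_cloud(load)
    print(f"load={load}: edge={e:.3f}s cloud={c:.3f}s "
          f"{'edge wins' if e < c else 'cloud wins'}")
```

The crossover point shifts with the inter-edge bandwidth: a faster fronthaul shrinks the transfer term and delays the point at which the cloud becomes preferable.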
Context 3
... Edge-favored vs. cloud-favored: Figures 18(a) and (b) compare edge- and cloud-favored resources, respectively, when the inter-edge bandwidth is 10 Gbps. Figure 18(a) shows that in the edge-favored case, when most of the resources are available at the edge, a baseline neighbor-selection scheme performs as well as ECON, which selects the best of all edge resources for the request. For the cloud-favored resource case shown in Fig. 18(b), ECON performs better than the baseline, as each of the edges has sufficient bandwidth to reach a faraway available edge resource. ...
Context 4
... favored resources, respectively, when the inter-edge bandwidth is 10 Gbps. Figure 18(a) shows that in the edge-favored case, when most of the resources are available at the edge, a baseline neighbor-selection scheme performs as well as ECON, which selects the best of all edge resources for the request. For the cloud-favored resource case shown in Fig. 18(b), ECON performs better than the baseline, as each of the edges has sufficient bandwidth to reach a faraway available edge resource. Therefore, when sufficient bandwidth is available, it is better to choose an edge even if fewer resources are available there, as the queuing time at an edge can be compensated for by faster request transfers. On ...
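The tradeoff described here, where queuing at a busier edge can pay off when fast transfers offset the wait, can be sketched as a simple cost comparison. This is a hypothetical sketch, not ECON's actual selection logic; all bandwidths and queue waits are made-up illustrative numbers.

```python
# Hypothetical sketch of the "transfer time vs. queuing time" tradeoff:
# pick the destination minimizing transfer delay + expected queue wait.

def transfer_time_s(request_mbits, bw_gbps):
    return request_mbits / (bw_gbps * 1000.0)

def total_delay(request_mbits, bw_gbps, queue_wait_s):
    return transfer_time_s(request_mbits, bw_gbps) + queue_wait_s

def pick_destination(request_mbits, options):
    """options: {name: (bw_gbps, queue_wait_s)}; returns (name, delay)."""
    return min(
        ((name, total_delay(request_mbits, bw, wait))
         for name, (bw, wait) in options.items()),
        key=lambda item: item[1],
    )

# A busy but well-connected far edge beats the core cloud here because
# the 10 Gbps inter-edge link makes the transfer nearly free.
options = {
    "far_edge": (10.0, 0.020),   # 10 Gbps link, 20 ms queue wait
    "core_cloud": (1.0, 0.005),  # slower backhaul, near-empty queue
}
print(pick_destination(100.0, options))
```

With a slow inter-edge link or a very long edge queue, the same comparison flips in favor of the cloud, which matches the snippet's point that the choice hinges on available bandwidth.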
Context 5
... Goodput: As discussed earlier, AR applications are delay-sensitive and discard packets which arrive late. Goodput is defined as the number of useful (on-time) bits per second. Figure 18. ECON and Baseline Comparison for Edge and Cloud Favored Resources (Inter-edge BW=10 Gbps) Figure 19. ...
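Goodput as defined here (on-time bits per second) can be computed from per-packet delays and an application deadline. This is a generic sketch, not the paper's measurement code; the 20 ms deadline and packet sizes are illustrative assumptions.

```python
# Generic goodput calculation: only packets arriving within the AR
# application's deadline count as useful bits. Numbers are illustrative.

def goodput_bps(packets, deadline_s, window_s):
    """packets: list of (delay_s, size_bits); goodput = on-time bits / window."""
    useful_bits = sum(size for delay, size in packets if delay <= deadline_s)
    return useful_bits / window_s

# 1-second window, 20 ms deadline for delay-sensitive AR frames: the
# 30 ms packet misses the deadline and contributes nothing.
packets = [(0.010, 8000), (0.015, 8000), (0.030, 8000), (0.018, 8000)]
print(goodput_bps(packets, deadline_s=0.020, window_s=1.0))  # → 24000.0
```

Throughput would count all four packets (32,000 bps here); goodput discounts the late one, which is why it is the more meaningful metric for deadline-driven AR traffic.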

Citations

... In terms of tools and software solutions, different open-source simulators have been proposed in recent years to simulate IoT environments, such as iFog-Sim [24], IoTSim [25], and EdgeCloudSim [26]. Several research works have made use of simulators to test the behavior of specific IoT applications on edge-cloud architectures [27]-[30]. Unlike these, our work analyzes how a large-scale edge-cloud architecture can be leveraged to efficiently manage urban mobility applications based on machine learning. ...
Article
Full-text available
In recent years, there has been an increase in the use of edge-cloud continuum solutions to efficiently collect and analyze data generated by IoT devices. In this paper, we investigate to what extent these solutions can manage tasks related to urban mobility, by combining real-time and low latency analysis offered by the edge with large computing and storage resources provided by the cloud. Our proposal is organized into three parts. The first part focuses on defining three application scenarios in which geotagged data generated by IoT objects, such as taxis, cars, and smartphones, are collected and analyzed through machine learning-based algorithms (i.e., next location prediction, location-based advertising, and points of interest recommendation). The second part is dedicated to modeling an edge-cloud continuum architecture capable of managing a large number of IoT devices and executing machine learning algorithms to analyze the data they generate. The third part analyzes the experimental results in which different design choices were evaluated, such as the number of devices and orchestration policies, to improve the performance of machine learning algorithms in terms of processing time, network delay, task failure, and computational resource utilization. The results highlight the potential benefits of edge and cloud cooperation in the three application scenarios, demonstrating that it significantly improves resource utilization and reduces the task failure rate compared to other widely adopted architectures, such as edge- or cloud-only architectures.
... We consider three scenarios: (i) only core cloud, (ii) only cloud-edge system, and (iii) both core and cloud-edge schemes, with the same number of resources in each scenario. As shown in Table 4, the system parameters used in the simulation are selected based on previous experiments that evaluated the scalability and performance of edge cloud systems [25]. Figure 8 depicts the typical response times for edge-only and cloud-only systems with different loads and no limits on edge-cloud bandwidths. ...
Article
Full-text available
This study proposes and develops a secured edge-assisted deep learning (DL)-based automatic COVID-19 detection framework that utilizes the cloud and edge computing assistance as a service with a 5G network and blockchain technologies. The development of artificial intelligence methods through services at the edge plays a significant role in serving many applications in different domains. Recently, some DL approaches have been proposed to successfully detect COVID-19 by analyzing chest X-ray (CXR) images in the cloud and edge computing environments. However, the existing DL methods leverage only local and small training datasets. To overcome these limitations, we employed the edges to perform three tasks. The first task was to collect data from different hospitals and send them to a global cloud to train a DL model on massive datasets. The second task was to integrate all the trained models on the cloud to detect COVID-19 cases automatically. The third task was to retrain the trained model on specific COVID-19 data locally at hospitals to improve and generalize the trained model. A feature-level fusion and reduction were adopted for model performance enhancement. Experimental results on a public CXR dataset demonstrated an improvement against recent related work, achieving the quality-of-service requirements.
... The enormous number of devices joining the continuum irrevocably implies that vast amounts of data are generated at the edge, increasing the challenges on the data processing layer and creating bottlenecks, especially in the execution of heavy ML-based tasks. Vertical offloading patterns, commonly used in cloud-edge and centralized learning approaches, heavily rely on low network latency and considerable bandwidth capacity [36,37]. This unfavorable dependency can be addressed by adopting a peer-to-peer offloading strategy instead of a hierarchical architecture. ...
Article
Full-text available
Future data-intensive intelligent applications are required to traverse across the cloud-to-edge-to-IoT continuum, where cloud and edge resources elegantly coordinate, alongside sensor networks and data. However, current technical solutions can only partially handle the data outburst associated with the IoT proliferation experienced in recent years, mainly due to their hierarchical architectures. In this context, this paper presents a reference architecture of a meta-operating system (RAMOS), targeted to enable a dynamic, distributed and trusted continuum which will be capable of facilitating the next-generation smart applications at the edge. RAMOS is domain-agnostic, capable of supporting heterogeneous devices in various network environments. Furthermore, the proposed architecture possesses the ability to place the data at the origin in a secure and trusted manner. Based on a layered structure, the building blocks of RAMOS are thoroughly described, and the interconnection and coordination between them is fully presented. Furthermore, illustration of how the proposed reference architecture and its characteristics could fit in potential key industrial and societal applications, which in the future will require more power at the edge, is provided in five practical scenarios, focusing on the distributed intelligence and privacy preservation principles promoted by RAMOS, as well as the concept of environmental footprint minimization. Finally, the business potential of an open edge ecosystem and the societal impacts of climate net neutrality are also illustrated.
... Based on the evaluation criteria, a controller can be shown to be better than other controllers. In [7], a framework was developed for modeling and analyzing the capacity of a city-scale hybrid edge cloud system designed to serve latency-sensitive applications with service time limits. By considering a metropolitan region like Chicago and co-locating edge computing clusters with known Wi-Fi access point locations, a system model for geographically scattered edge clouds is constructed. In both edge and cloud computing, the approach allowed for the delivery of network bandwidth and processing resources with specified requirements. ...
Conference Paper
Full-text available
Cloud computing and cloud testing are vast fields that have attracted significant attention recently. In addition, the need to find an approach for measuring cloud-based applications' effectiveness has also increased. In this work, we introduced an approach to testing the performance of cloud software services on the Amazon cloud. We used two cloud-based applications hosted in the Amazon cloud to demonstrate the approach, based on five technical performance metrics. We applied the testing methodology using a JMeter test script. The two selected applications represent two different taxonomies: 2-tier and 3-tier architectures. Following the testing process, we found that the WordPress application (i.e., 3-tier architecture) performs better than Ghost and is more stable in terms of the selected performance metrics. Practitioners would benefit from this study by gaining a better understanding of the assessment and testing of n-tier cloud-based software services using technical arguments.
... Performance efficiency is the most commonly studied quality attribute; some of the indicators used to quantify performance are response and reaction time, worst-case and average execution time, throughput, CPU, memory, and network utilization, performance under different loads, and energy consumption [37,38,39,40]. Similarly, scalability can be achieved by deploying more nodes; however, node management, software parallelism, load balancing, device orchestration, etc., can make this a complex process [41,42,43,44]. The standardization of interfaces, protocols, and APIs used in EC will improve interoperability [45,46]. ...
... Maheshwari et al. [41] used a simulation model to obtain the system capacity and response time for an augmented reality application using Microsoft HoloLens, deployed in a city-scale, general multi-tier network containing both edge and central cloud servers. The model analyzed the impact of key parameters, namely edge-cloud resource distribution, front-haul/back-haul bandwidth, inter-edge bandwidth, and edge-core bandwidth, under different system loads. ...
Preprint
Full-text available
The proliferation of Internet of Things (IoT) devices and advances in their computing capabilities give an impetus to the Edge Computing (EC) paradigm, which can facilitate localized computing and data storage. As a result, limitations of cloud computing, such as network connectivity issues, data mobility constraints, and real-time processing delays, can be addressed more efficiently. EC can create many opportunities across the breadth of IT domains and cyber-physical systems. Several studies have been conducted describing EC general requirements, challenges, and issues. However, considering the complexity involved in the EC paradigm, non-functional requirements (NFRs) are equally important as functional requirements and should be thoroughly investigated. This paper discusses NFRs, namely performance, reliability, scalability, and security, that can assist in maturing the EC paradigm. To accomplish this objective, available case studies and the state of the art related to non-functional requirements, real-world issues, and challenges concerning EC are reviewed. Ultimately, the paper anatomizes the aforementioned NFRs leveraging the six-part scenario form of source-stimulus-artifact-environment-response-response measure to assert Quality of Service (QoS) in EC.
... With the advent of cloud computing, several approaches studied hosting HAR models on the cloud (Chun et al. 2011; Gravina et al. 2017) to predict activities sent from user devices. Nonetheless, performing the activity prediction on the cloud may incur latency caused by network issues or by increased loads during peak times (Maheshwari et al. 2018). Moreover, the service may be interrupted due to network connection problems. ...
Article
Full-text available
Human activity recognition is a thriving field with many applications in several domains. It relies on well-trained artificial intelligence models to provide accurate real-time predictions of various human movements and activities. Human activity recognition utilizes various types of sensors such as video cameras, fixed motion sensors, and those found in personal smart edge devices such as accelerometers and gyroscopes. The latter sensors capture motion as time-series data, following a specific pattern for each movement. However, movements for some users may vary from these patterns, which limit the efficacy of using a generic model. This paper proposes a human activity recognition architecture that utilizes deep learning models using time-series data. It applies incremental learning for building personalized models derived from a well-trained model. The architecture uses edge devices for model prediction and the cloud for model training. Performing the prediction on edge devices reduces the network overhead as well as the load on the cloud. We tested our approach on a publicly available dataset containing samples for daily living activities and fall states. The results show that building a personalized model from a well-trained model significantly improves the prediction accuracy. Moreover, deploying a light version of the model on edge devices maintains prediction accuracy and provides comparable response times to the original model on the cloud.
... Performance efficiency is the most commonly studied quality attribute; some of the indicators used to quantify performance are response and reaction time, worst-case and average execution time, throughput, CPU, memory, and network utilization, performance under different loads, and energy consumption [37,38,39,40]. Similarly, scalability can be achieved by deploying more nodes; however, node management, software parallelism, load balancing, device orchestration, etc., can make this a complex process [41,42,43,44]. The standardization of interfaces, protocols, and APIs used in EC will improve interoperability [45,46]. ...
... Maheshwari et al. [41] used a simulation model to obtain the system capacity and response time for an augmented reality application using Microsoft HoloLens, deployed in a city-scale, general multi-tier network containing both edge and central cloud servers. The model analyzed the impact of key parameters, namely edge-cloud resource distribution, front-haul/back-haul bandwidth, inter-edge bandwidth, and edge-core bandwidth, under different system loads. ...
Article
Full-text available
The proliferation of Internet of Things (IoT) devices and advances in their computing capabilities give an impetus to the Edge Computing (EC) paradigm, which can facilitate localized computing and data storage. As a result, limitations of cloud computing, such as network connectivity issues, data mobility constraints, and real-time processing delays, can be addressed more efficiently. EC can create many opportunities across the breadth of IT domains and cyber-physical systems. Several studies have been conducted describing EC general requirements, challenges, and issues. However, considering the complexity involved in the EC paradigm, non-functional requirements (NFRs) are equally important as functional requirements and should be thoroughly investigated. This paper discusses NFRs, namely performance, reliability, scalability, and security, that can assist in maturing the EC paradigm. To accomplish this objective, available case studies and the state of the art related to non-functional requirements, real-world issues, and challenges concerning EC are reviewed. Ultimately, the paper anatomizes the aforementioned NFRs leveraging the six-part scenario form of source-stimulus-artifact-environment-response-response measure to assert Quality of Service (QoS) in EC.
... For example, for the FIWARE platform, a cloud-based testbed is created to generate protocol load and emulate large-scale deployments of devices that send data [22], taking into account the cloud-based deployment, the performance observability, the massive load generation, and adherence to standards. The ability of an edge cloud system to support latency-sensitive applications was also analyzed in [23] from the scalability and performance points of view. Li et al. used performance testing to evaluate their proposed replica creation algorithm based on the Grey-Markov chain model [24]. ...
Article
Full-text available
The use of monitoring systems based on cloud computing has become common for smart buildings. However, the dilemma of centralization versus decentralization, in terms of gathering information and making the right decisions based on it, remains. Performance, dependent on the system design, does matter for emergency detection, where response time and loading behavior become very important. We studied several design options based on edge computing and containers for a smart building monitoring system that sends alerts to the responsible personnel when necessary. The study evaluated performance, including a qualitative analysis and load testing, for our experimental settings. From 700+ edge nodes, we obtained response times that were 30% lower for the public cloud versus the local solution. For up to 100 edge nodes, the values were better for the latter, and in between, they were rather similar. Based on an interpretation of the results, we developed recommendations for five real-world configurations, and we present the design choices adopted in our development for a complex of smart buildings.
... Therefore, we assume that the communication latency can be reduced by decreasing the number of hops between nodes, and increasing the network bandwidth. This has also been confirmed in the literature [146]. Further evidence to support this decision (i.e., using hops as an indicator of network proximity when creating fog computing systems), is provided by our results (in Section 3.4
... The transmission delay of direct routing from Equation (5.4), i.e., Tra(n1, nX), depends on the bandwidth limit of the Internet provider. Notably, since the bandwidth may be shared among many users, the effective bandwidth of Tra(n1, nX) can be lower than the limit, based on the overall network traffic, which depends on the current network load [146]. This phenomenon can cause low bandwidth in Tra(n1, nX), especially when nX is a remote cloud compute node [171]. ...
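The dependence of transmission delay on shared, load-dependent bandwidth can be illustrated with a minimal fair-share sketch. This is an assumption for illustration only, not the thesis's actual model; the link limit and user counts are made up.

```python
# Minimal sketch: transmission delay under a fair-share bandwidth model.
# The provider's limit is split among concurrent users, so the effective
# bandwidth (and hence the delay) depends on the current network load.

def effective_bw_mbps(limit_mbps, concurrent_users):
    # Each of the concurrent users gets an equal share of the link.
    return limit_mbps / max(1, concurrent_users)

def transmission_delay_s(data_mbits, limit_mbps, concurrent_users):
    return data_mbits / effective_bw_mbps(limit_mbps, concurrent_users)

# Same 80 Mbit transfer on a 100 Mbps link: delay grows with contention.
for users in (1, 4, 10):
    print(users, transmission_delay_s(80.0, 100.0, users))
```

Real links rarely share bandwidth this evenly, but the sketch captures why a remote cloud node reached over a contended path can see far less than the nominal provider limit.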
Thesis
Full-text available
Fog computing is a novel computing paradigm which enables the execution of applications on compute nodes which reside both in the cloud and at the edge of the network. Various performance benefits, such as low communication latency and high network bandwidth, have turned this paradigm into a well-accepted extension of cloud computing. So far, many fog computing systems have been proposed, consisting of distributed compute nodes which are often organized hierarchically in layers. Such systems commonly rely on the assumption that the nodes of adjacent layers reside close to each other, thereby achieving low latency computations. However, this assumption may not hold in fog computing systems that span over large geographical areas, due to the wide distribution of the nodes. In addition, most proposed fog computing systems route the data on a path which starts at the data source, and goes through various edge and cloud nodes. Each node on this path may accept the data if there are available resources to process this data locally. Otherwise, the data is forwarded to the next node on path. Notably, when the data is forwarded (rather than accepted), the communication latency increases by the delay to reach the next node. This thesis aims at tackling these problems by proposing distributed algorithms whereby the compute nodes measure the network proximity to each other, and self-organize accordingly. These algorithms are implemented on geographically distributed compute nodes, considering image processing and smart city use cases, and are thoroughly evaluated showing significant latency- and bandwidth-related performance benefits. Furthermore, we analyze the communication latency of sending data to distributed edge and cloud compute nodes, and we propose two novel routing approaches: i) A context-aware routing mechanism which maintains a history of previous transmissions, and uses this history to find nearby nodes with available resources. 
ii) edgeRouting, which leverages the high bandwidth between nodes of cloud providers in order to select network paths with low communication latency. Both of these mechanisms are evaluated under real-world settings, and are shown to be able to lower the communication latency of fog computing systems significantly, compared to alternative methods.