Figure 4
Image processing: performance comparison of the edge server- and P2P-based execution models. Then, to evaluate our peer selection approach, we compared the estimated execution time with the actual time taken by the P2P collaboration in five scenarios, which engage between 1 and 5 devices. Figure 5 (left) shows that the estimated and actual lines follow the same trend with close values. We found that this trend approximation also occurs when starting the service execution from different devices in the P2P network. Figure 5 (right) shows similar trends on 3 devices with different resource capacities when requesting the same service from different mobile devices in the P2P network.
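To make the peer selection idea concrete, here is a minimal sketch of estimate-driven peer ranking, assuming each peer advertises a relative speed and current load; the scoring model and all names are illustrative, not taken from the paper.

```python
# Illustrative sketch: pick collaborating peers by estimated completion
# time. The scoring model (work / effective speed) is an assumption, not
# the paper's actual estimator.
from dataclasses import dataclass

@dataclass
class Peer:
    name: str
    speed: float   # relative compute capacity (work units / second)
    load: float    # fraction of capacity already in use, in [0, 1)

def estimated_time(peer: Peer, work_units: float) -> float:
    """Estimate how long `work_units` would take on this peer."""
    return work_units / (peer.speed * (1.0 - peer.load))

def select_peers(peers: list[Peer], total_work: float, k: int) -> list[Peer]:
    """Choose the k peers with the lowest estimated time for an equal share."""
    share = total_work / k
    return sorted(peers, key=lambda p: estimated_time(p, share))[:k]

if __name__ == "__main__":
    peers = [Peer("tablet", 1.0, 0.2), Peer("phone-a", 0.6, 0.1),
             Peer("phone-b", 0.8, 0.5), Peer("laptop", 2.5, 0.3)]
    for p in select_peers(peers, total_work=100.0, k=3):
        print(p.name, round(estimated_time(p, 100.0 / 3), 2), "s")
```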

Context in source publication

Context 1
... measured the total execution time of the client that elapsed between the initiation of the service request and the arrival of all results. Figure 4 compares the edge server-based and P2P-based service executions. Although the edge server-based service execution outperforms the P2P-based service executions, as the number of mobile devices increases, the performance of the P2P-based service model improves significantly. ...
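The measurement itself is straightforward to reproduce in outline. Below is a hedged Python sketch that fans a request out to several peers and times the interval from initiation to the arrival of all results; the remote call is a simulated stand-in, not the paper's implementation.

```python
# Illustrative sketch: measure client-side total execution time from
# request initiation to arrival of all results, fanning work out to peers.
import time
from concurrent.futures import ThreadPoolExecutor

def invoke_service(peer: str, chunk: bytes) -> bytes:
    """Stand-in for the real remote call to a peer or edge server."""
    time.sleep(0.05)  # simulate network + processing delay
    return chunk[::-1]

def timed_execution(peers: list[str], chunks: list[bytes]) -> float:
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=len(peers)) as pool:
        results = list(pool.map(invoke_service, peers, chunks))
    return time.perf_counter() - start  # elapsed until ALL results arrived

if __name__ == "__main__":
    peers = ["phone-a", "phone-b", "tablet"]
    chunks = [b"tile-1", b"tile-2", b"tile-3"]
    print(f"total execution time: {timed_execution(peers, chunks):.3f} s")
```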

Citations

... Improved Reliability and Continuity: Edge computing enhances reliability by enabling applications to operate autonomously even when connectivity to centralized cloud services is disrupted [42]. Edge nodes and devices continue to function and process data locally, ensuring uninterrupted operations in remote or challenging environments with unreliable network connectivity [43]. This capability is crucial for mission-critical applications in industries such as manufacturing, transportation, and utilities. ...
Preprint
Full-text available
Edge computing has emerged as a transformative data processing method by decentralizing computations and bringing them toward the data source, significantly reducing latency and enhancing response times. However, this shift introduces unique security challenges, especially in the detection and prevention of cyberattacks. This paper gives a comprehensive evaluation of the security landscape in edge computing, focusing on identifying and mitigating various types of attacks. We explore the challenges associated with detecting and preventing attacks in edge computing environments, acknowledging the limitations of existing approaches. One notable novelty of this survey article is that we designed a Web application that runs on an edge network and simulates SQL injection attacks, a common threat in edge computing. Through this simulation, we examined the sanitization strategies used to detect and prevent such attacks using input sanitization techniques, ensuring that the malicious SQL code was neutralized. Our study contributes a deeper understanding of the security landscape in edge computing by providing meaningful insights into the effectiveness of multiple prevention strategies.
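As a hedged illustration of the input-sanitization idea described above, the sketch below contrasts an injectable query with a parameterized one using Python's built-in sqlite3 module; the schema and the hostile input are invented for the example.

```python
# Illustrative sketch: neutralizing SQL injection with parameterized
# queries. The schema and the hostile input are invented for this example.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

hostile = "x' OR '1'='1"

# Vulnerable: string interpolation lets the input rewrite the query.
injectable = f"SELECT * FROM users WHERE name = '{hostile}'"
print(conn.execute(injectable).fetchall())   # returns rows it should not

# Sanitized: the driver binds the value, so the quote stays literal data.
safe = "SELECT * FROM users WHERE name = ?"
print(conn.execute(safe, (hostile,)).fetchall())  # returns []
```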
... While the stalwart TCP/IP suite has provided a foundation for network communication, managing each data packet's integrity becomes increasingly complex in the distributed and varied environments of edge-cloud systems. This complexity is echoed in the works of scholars such as [28], who advocate for architectures that dynamically adapt to network disconnections, and [29], who propose scalable blockchain models to enhance security and efficiency in IoT applications. The PAIR Mechanism aligns with these concepts, focusing on a higher-level goal of achieving consistent data states rather than managing the minutiae of transmission. ...
Article
Full-text available
This study presents a newly developed edge computing platform designed to enhance connectivity between edge devices and the cloud in the agricultural sector. Addressing the challenge of synchronizing a central database across 850 remote farm locations in various countries, the focus lies on maintaining data integrity and consistency for effective farm management. The incorporation of a new edge device into existing setups has significantly improved computational capabilities for tasks like data synchronization and machine learning. This research highlights the critical role of cloud computing in managing large data volumes, with Amazon Web Services hosting the databases. This paper showcases an integrated architecture combining edge devices, networks, and cloud computing, forming a seamless continuum of services from cloud to edge. This approach proves effective in managing the significant data volumes generated in remote agricultural areas. This paper also introduces the PAIR Mechanism, which is a solution developed in response to the unique challenges of agricultural data management, emphasizing resilience and simplicity in data synchronization between cloud and edge databases. The PAIR Mechanism’s potential for robust data management in IoT and cloud environments is explored, offering a novel perspective on synchronization challenges in edge computing.
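The abstract does not spell out how the PAIR Mechanism reconciles the two databases, so the following is only a generic sketch of timestamp-based, last-writer-wins synchronization between an edge store and a cloud store; every name and value in it is an assumption.

```python
# Illustrative sketch: timestamp-based reconciliation between an edge-local
# store and a cloud store. This is a generic last-writer-wins scheme, NOT
# the paper's PAIR Mechanism, whose details the abstract does not give.
from typing import Any

Record = dict[str, Any]  # expects keys: "value", "updated_at" (epoch secs)

def reconcile(edge: dict[str, Record], cloud: dict[str, Record]) -> None:
    """Bring both stores to the same state; the newest write wins per key."""
    for key in edge.keys() | cloud.keys():
        e, c = edge.get(key), cloud.get(key)
        if e is None or (c is not None and c["updated_at"] > e["updated_at"]):
            edge[key] = c           # cloud copy is newer (or edge had none)
        elif c is None or e["updated_at"] > c["updated_at"]:
            cloud[key] = e          # edge copy is newer (or cloud had none)

edge = {"cow-17": {"value": "weighed", "updated_at": 1700000200}}
cloud = {"cow-17": {"value": "vaccinated", "updated_at": 1700000100},
         "cow-18": {"value": "tagged", "updated_at": 1700000050}}
reconcile(edge, cloud)
assert edge == cloud
print(edge)
```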
... In particular, the system features a local gateway that collects the available microservices, provided by mobile and IoT devices. For a given mobile service request with reliability, trustworthiness, and QoS-optimality requirements [37], the gateway orchestrates the combined execution of equivalent microservices, provided by mobile and IoT devices, which can be unreliable and untrustworthy [36]. ...
Article
Full-text available
A QoS-optimal service balances reliability, execution cost, and latency to satisfy application requirements. In emerging distributed environments, with their unreliable and resource-scarce mobile/IoT devices, it is hard but essential to optimize the QoS of mobile services. Fortunately, these environments are characterized by ever-growing equivalent functionalities that satisfy the same requirements by different means. The combined execution of equivalent microservices has been used to improve QoS (e.g., majority voting for accuracy, speculative parallelism for latency, and failover for reliability). These executions are commonly described as workflow patterns, coarse-grained recurring interactions across microservices within a service. However, as the number of equivalent microservices grows, applying a coarse-grained pattern may cause severely unbalanced QoS, while nesting these patterns is convoluted to implement and expensive to maintain. In this article, we introduce a novel workflow meta-pattern for defining fine-grained workflow patterns that describe QoS-optimal combined executions of equivalent microservices. The meta-pattern employs a domain-specific algebraic expression to specify the invocation sequences of equivalent microservices, and a Boolean function to determine whether to terminate the execution. To evaluate the applicability of our meta-pattern, we build a Scala functional programming library, by which we further develop edge computing and cognitive service applications. Our experiments show that applying our meta-pattern to define such workflow patterns saves programmer effort, while the resulting patterns effectively improve the QoS of distributed applications.
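The paper's library is written in Scala; purely as an illustration of the (invocation sequence, termination predicate) idea, here is a hedged Python sketch of a fine-grained failover pattern over equivalent microservices. All names are invented.

```python
# Illustrative sketch: a fine-grained workflow pattern as (invocation
# sequence, termination predicate) over equivalent microservices.
# The paper's library is Scala; this Python rendering is an assumption.
from typing import Callable, Iterable, Optional

Microservice = Callable[[str], Optional[str]]

def run_pattern(sequence: Iterable[Microservice],
                terminate: Callable[[list[Optional[str]]], bool],
                request: str) -> list[Optional[str]]:
    """Invoke equivalent services in order until `terminate` says stop."""
    results: list[Optional[str]] = []
    for svc in sequence:
        results.append(svc(request))
        if terminate(results):
            break
    return results

# Two equivalent services: one flaky, one reliable.
flaky = lambda req: None                  # simulates a failed invocation
reliable = lambda req: f"ok:{req}"

# Failover pattern: stop as soon as any invocation succeeded.
got_answer = lambda rs: rs[-1] is not None

print(run_pattern([flaky, reliable], got_answer, "resize-image"))
# -> [None, 'ok:resize-image']
```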
... In a highly dynamic network [27], it may not be possible to obtain the status of all nodes (due to other unexpected failures or network link breakages and congestion) instantaneously, or else it may be time and bandwidth costly to contact the master directly and retrieve that information. Therefore, in that case, the newcomer node needs to adopt the best guessing strategy and choose among the multiple repair options to complete the repair process (either exactly or functionally) as quickly as possible using minimum network resources. ...
Article
Full-text available
The guesswork refers to the distribution of the minimum number of trials needed to guess a realization of a random variable accurately. In this study, a non-trivial generalization of the guesswork called guessing cost (also referred to as cost of guessing) is introduced, and an optimal strategy for finding the ρ-th moment of guessing cost is provided for a random variable defined on a finite set whereby each choice is associated with a positive finite cost value (unit cost corresponds to the original guesswork). Moreover, we derive asymptotically tight upper and lower bounds on the logarithm of guessing cost moments. Similar to previous studies on the guesswork, established bounds on the moments of guessing cost quantify the accumulated cost of guesses required for correctly identifying the unknown choice and are expressed in terms of Rényi’s entropy. Moreover, new random variables are introduced to establish connections between the guessing cost and the guesswork, leading to induced strategies. Establishing this implicit connection helped us obtain improved bounds for the non-asymptotic region. As a consequence, we establish the guessing cost exponent in terms of Rényi entropy rate on the moments of the guessing cost using the optimal strategy by considering a sequence of independent random variables with different cost distributions. Finally, with slight modifications to the original problem, these results are shown to be applicable for bounding the overall repair bandwidth for distributed data storage systems backed up by base stations and protected by bipartite graph codes.
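For orientation, the unit-cost special case that guessing cost generalizes is Arikan's classical guesswork bound, sketched below in LaTeX; this is the well-known 1996 result, not the paper's new guessing-cost bound.

```latex
% Classical guesswork (unit costs): guess the values of $X$, supported on a
% set of size $M$, in order of decreasing probability. Then the $\rho$-th
% moment of the number of guesses $G(X)$ satisfies (Arikan, 1996)
\[
  (1+\ln M)^{-\rho}\, e^{\rho H_{1/(1+\rho)}(X)}
  \;\le\; \mathbb{E}\!\left[G(X)^{\rho}\right]
  \;\le\; e^{\rho H_{1/(1+\rho)}(X)},
\]
% where $H_{\alpha}(X) = \frac{1}{1-\alpha}\ln \sum_{x} P_X(x)^{\alpha}$ is
% the R\'enyi entropy of order $\alpha = 1/(1+\rho)$.
```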
... Beyond the ETSI architecture, alternative edge computing architectures have been proposed in order to improve dependability. One architecture aims to deliver failure-resistant and efficient applications [185]. Another work proposes a dependable edge computing architecture customized for smart construction [186]. ...
Article
Full-text available
The Fifth Generation (5G) of mobile networks offers new and advanced services with stricter requirements. Multi-access Edge Computing (MEC) is a key technology that enables these new services by deploying multiple devices with computing and storage capabilities at the edge of the network, close to end-users. MEC enhances network efficiency by reducing latency, enabling real-time awareness of the local environment, allowing cloud offloading, and reducing traffic congestion. New mission-critical applications require high security and dependability, which are rarely addressed alongside performance. This survey paper fills this gap by presenting 5G MEC’s three aspects: security, dependability, and performance. The paper provides an overview of MEC and introduces a taxonomy, the state of the art, and the challenges related to each aspect. Finally, the paper presents the challenges of jointly addressing these three aspects.
... Compared to mobile cloud computing (MCC) [6], MEC faces some unique uncertainties in edge server placement, as follows. Firstly, unlike the highly reliable large data centers in MCC, MEC networks are usually heterogeneous and edge servers are less reliable [7], [8], [9], [10], [11], [21], e.g., some edge servers can crash at any time [12]. Secondly, different from MCC tasks, which are mostly offloaded through reliable wired links, MEC tasks are usually offloaded to edge servers through failure-prone wireless links, as edge servers are generally co-located with the cellular base stations (BSs) or local wireless access points (APs) [2]. ...
... In this work, we aim to maximize the overall served workload of user requests in the long term, in the presence of possible edge server failures. Compared to stable and reliable cloud servers, edge servers are heterogeneous and may fail [12], [7]. In other words, we aim to determine a server placement scheme in a robust manner in the face of server failures. ...
... where (Ω, G_p) is the p-independence system formed by constraints (11) and (12), and k ≤ S − 1 is the maximum number of server failures, which controls the degree of robustness. In practice, correlations among neighboring server failures may exist because of shared key configurations or similar environments between neighboring servers. ...
Article
Full-text available
In this work, we study the problem of Robust Server Placement (RSP) for edge computing, i.e., in the presence of uncertain edge server failures, how to determine a server placement strategy to maximize the expected overall workload that can be served by edge servers. We mathematically formulate the RSP problem in the form of robust max-min optimization, derived from two consequentially equivalent transformations of the problem that does not consider robustness, followed by a robust conversion. RSP is challenging to solve, because the explicit expression of the objective function in RSP is hard to obtain, and it is a robust max-min problem with knapsack constraints, which is still an unexplored problem in the literature. We reveal that the objective function is monotone submodular, and propose two solutions to RSP. Firstly, after proving that the involved constraints form a p-independence system constraint, where p is a parameter determined by the ratio of the coefficients in the knapsack constraints, we propose an algorithm that achieves a provable approximation ratio in polynomial time. Secondly, we prove that one of the knapsack constraints is a matroid constraint, and propose another polynomial time algorithm with a better approximation ratio.
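The classical algorithmic tool in this setting is the greedy heuristic, which for monotone submodular maximization over a p-independence system is known to achieve a 1/(p+1) approximation. The sketch below illustrates the greedy idea on a simplified cardinality constraint with an invented coverage-style objective; the paper's actual objective is the expected served workload under server failures.

```python
# Illustrative sketch: greedy placement maximizing a monotone submodular
# objective under a simple independence constraint (here: a cardinality
# budget). The coverage-style objective and data are invented.
def served_workload(placement: frozenset[str],
                    coverage: dict[str, set[int]]) -> int:
    """Monotone submodular: number of distinct users covered."""
    users: set[int] = set()
    for site in placement:
        users |= coverage[site]
    return len(users)

def greedy_placement(sites: list[str], coverage: dict[str, set[int]],
                     budget: int) -> frozenset[str]:
    chosen: frozenset[str] = frozenset()
    for _ in range(budget):
        best, gain = None, 0
        for s in sites:
            if s in chosen:
                continue
            g = (served_workload(chosen | {s}, coverage)
                 - served_workload(chosen, coverage))
            if g > gain:
                best, gain = s, g
        if best is None:      # no remaining site adds value
            break
        chosen |= {best}
    return chosen

coverage = {"bs1": {1, 2, 3}, "bs2": {3, 4}, "bs3": {5}, "bs4": {1, 2}}
print(sorted(greedy_placement(list(coverage), coverage, budget=2)))
# -> ['bs1', 'bs2'] (greedy takes the first best-gain site at ties)
```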
... e) Dependable architecture: Beyond the ETSI architecture, alternative edge computing architectures have been proposed in order to improve dependability. One architecture aims to deliver failure-resistant and efficient applications [155]. Another work proposes a dependable edge computing architecture customized for smart construction [156]. ...
Preprint
The main innovation of the Fifth Generation (5G) of mobile networks is the ability to provide novel services with new and stricter requirements. One of the technologies that enable the new 5G services is Multi-access Edge Computing (MEC). MEC is a system composed of multiple devices with computing and storage capabilities that are deployed at the edge of the network, i.e., close to the end users. MEC reduces latency and enables contextual information and real-time awareness of the local environment. MEC also allows cloud offloading and the reduction of traffic congestion. Performance is not the only requirement of the new 5G services. New mission-critical applications also require high security and dependability. These three aspects (security, dependability, and performance) are rarely addressed together. This survey fills this gap and presents 5G MEC by addressing all three aspects. First, we overview the background knowledge on MEC by referring to the current standardization efforts. Second, we present each aspect individually by introducing the related taxonomy (important for readers who are not experts on the aspect), the state of the art, and the challenges of 5G MEC. Finally, we discuss the challenges of jointly addressing the three aspects.
... In dynamic and volatile environments (e.g., disaster zones) where connectivity to traditional edge processing clouds may be disrupted, the authors of [18] used P2P as a fallback mechanism for discovering neighbouring resources to enable collaborative processing of data. In [19] and [20], the authors investigated the scheduling challenges in dynamic topologies with numerous mobile edge servers. ...
Conference Paper
Full-text available
With the continuous growth in the number of mobile networked devices, and their rapidly improving compute capabilities, it has become possible to harness them as an extended cloud. This presents a clear opportunity to place latency-sensitive applications and services at the edge. As applications are increasingly based on the microservices and Network Function Virtualization (NFV) architectures, their overall performance will depend on the location of their constituent microservices relative to one another. An extended cloud comprising mobile devices therefore results in a dynamic network, making it difficult for traditional orchestration systems in distant clouds to perform timely management and replacement of microservices to ensure the overall application or service is performant. We propose to address this challenge by decentralizing the service discovery and allocation logic, placing it in client microservices. This paper presents a P2P-based design and prototype system that empowers clients to discover desired services based on pre-defined QoS requirements. If none are found, clients identify compute nodes meeting the requirements to request a new service allocation.
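A hedged sketch of this client-side logic: first look for an advertised instance meeting a QoS bound, and only if none exists pick a capable node for a new allocation. All data structures and thresholds here are invented, not the paper's protocol.

```python
# Illustrative sketch: client-side discovery of a service meeting QoS
# requirements, with fallback to requesting a new allocation on a capable
# node. All structures and thresholds are invented for the example.
from dataclasses import dataclass
from typing import Optional

@dataclass
class ServiceAd:
    node: str
    service: str
    latency_ms: float   # advertised round-trip latency to this instance

@dataclass
class NodeAd:
    node: str
    free_cpu: float     # fraction of CPU available

def discover(ads: list[ServiceAd], service: str,
             max_latency_ms: float) -> Optional[ServiceAd]:
    """Return the best advertised instance meeting the QoS bound, if any."""
    ok = [a for a in ads if a.service == service
          and a.latency_ms <= max_latency_ms]
    return min(ok, key=lambda a: a.latency_ms) if ok else None

def allocate(nodes: list[NodeAd], min_free_cpu: float) -> Optional[str]:
    """Pick a compute node able to host a new service instance."""
    ok = [n for n in nodes if n.free_cpu >= min_free_cpu]
    return max(ok, key=lambda n: n.free_cpu).node if ok else None

ads = [ServiceAd("phone-a", "detect", 80.0), ServiceAd("tablet", "detect", 35.0)]
nodes = [NodeAd("phone-b", 0.7), NodeAd("laptop", 0.4)]
found = discover(ads, "detect", max_latency_ms=25.0)
print(found or f"allocate new instance on {allocate(nodes, 0.5)}")
# -> no ad meets 25 ms, so a new instance is requested on phone-b
```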
... The operational cost of these "truck rolls" can quickly overwhelm the capital cost of redundant resources at the edge, and is estimated to be more than one thousand dollars per event [22]. On the other hand, unlike central data centers, edge data centers are likely to be restricted by environmental factors such as space and power, the stability of network connections, and temperature [2,9]. These factors make the expense of provisioning and operating redundant resources at the edge competing with the cost of truck rolls. ...
Conference Paper
Full-text available
In the resource-rich environment of data centers most failures can quickly failover to redundant resources. In contrast, failure in edge infrastructures with limited resources might require maintenance personnel to drive to the location in order to fix the problem. The operational cost of these "truck rolls" to locations at the edge infrastructure competes with the operational cost incurred by extra space and power needed for redundant resources at the edge. Computational storage devices with network interfaces can act as network-attached storage servers and offer a new design point for storage systems at the edge. In this paper we hypothesize that a system consisting of a larger number of such small "embedded" storage nodes provides higher availability due to a larger number of failure domains while also saving operational cost in terms of space and power. As evidence for our hypothesis, we compared the possibility of data loss between two different types of storage systems: one is constructed with general-purpose servers, and the other one is constructed with embedded storage nodes. Our results show that the storage system constructed with general-purpose servers has 7 to 20 times higher risk of losing data over the storage system constructed with embedded storage devices. We also compare the two alternatives in terms of power and space using the Media-Based Work Unit (MBWU) that we developed in an earlier paper as a reference point.
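As a hedged back-of-the-envelope illustration of the failure-domain argument, the toy model below assumes an object is lost only if the remaining replicas fail during the rebuild window, and that smaller nodes rebuild faster because they hold less data. All rates and capacities are invented; the paper's comparison relies on its own failure model and the MBWU metric.

```python
# Illustrative toy model: fewer large failure domains vs. many small ones
# under 3-way replication. An object is lost if, after one replica's node
# fails, the other replica nodes also fail before the rebuild finishes.
# All numbers are invented for the example.
def loss_rate(num_nodes: int, node_capacity_tb: float,
              rebuild_tb_per_hour: float, fail_rate_per_hour: float,
              replicas: int = 3) -> float:
    """Approximate rate of data-loss events per hour."""
    rebuild_hours = node_capacity_tb / rebuild_tb_per_hour
    p_peer_dies_in_window = fail_rate_per_hour * rebuild_hours
    return (num_nodes * fail_rate_per_hour
            * p_peer_dies_in_window ** (replicas - 1))

if __name__ == "__main__":
    # Same total capacity (256 TB) in both designs.
    big = loss_rate(4, 64.0, 1.0, 1e-4)     # few large failure domains
    small = loss_rate(16, 16.0, 1.0, 1e-4)  # many small failure domains
    print(f"4 big servers : {big:.2e} loss events/hour")
    print(f"16 small nodes: {small:.2e} loss events/hour")
```

In this toy model the smaller nodes win because their shorter rebuild window shrinks the period of vulnerability faster than the extra node count raises the rate of initial failures; the paper's measured 7 to 20 times gap comes from its own, more detailed analysis.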
... Edge nodes are without doubt a point of failure in any decentralised cloud network. Le et al. give a solution to partial failures (e.g., connectivity loss) between the edge nodes in MEC [63]. Their architecture is again hierarchical, with mobile nodes storing local backup data dispersed among them. ...
Article
Cloud infrastructures are highly favoured as a computing delivery model worldwide, creating a strong societal dependence. It is therefore vital to enhance their resilience, providing persistent service delivery under a variety of conditions. Cloud environments are highly complex and continuously evolving. Additionally, the plethora of use-cases ensures that requirements for persistent service delivery vary. As a contribution to knowledge, this work surveys resilience techniques for cloud environments. We apply a novel perspective using a layered model of traditional and emerging cloud paradigms. Works are then classified according to the ResiliNets model. For each layer, the most common techniques and their limitations are derived, including each actor’s strength in influencing cloud resilience with each technique. We conclude with some future challenges to the field of resilient cloud computing.