Article

Edge Computing: Vision and Challenges


Abstract

The proliferation of Internet of Things and the success of rich cloud services have pushed the horizon of a new computing paradigm, Edge computing, which calls for processing the data at the edge of the network. Edge computing has the potential to address the concerns of response time requirement, battery life constraint, bandwidth cost saving, as well as data safety and privacy. In this paper, we introduce the definition of Edge computing, followed by several case studies, ranging from cloud offloading to smart home and city, as well as collaborative Edge to materialize the concept of Edge computing. Finally, we present several challenges and opportunities in the field of Edge computing, and hope this paper will gain attention from the community and inspire more research in this direction.


... Edge computing leads to a straightforward ecosystem in which one trusted domain interacts with other trusted domains and a large number of clients are served. Although the many edge paradigms differ slightly, the underlying ideas are the same [6]. ...
... For example, a person moving from one location to another with a mobile phone requires a large number of cloud servers with short response times and also depends on the stability of the cloud nodes. This has prompted a great deal of research on cloud-like services near the edge of the network [6]. Because the computation is performed locally, system performance can be improved with very low response times. ...
... Instead of relying on explicit, hand-crafted features to do the job, DL works on data representations. The data is organized into a hierarchy of hidden representations that allows salient features to be learned [6]. ...
Article
Full-text available
Evolutionary algorithms find the best solution to a problem through a process modeled on natural selection and can be used to solve multi-objective optimization problems at the edge. Machine learning algorithms are employed to handle the most arduous challenges by repeatedly building a model from observations (reinforcement learning) or from training data. Evolutionary algorithms, such as genetic algorithms, can solve a number of edge computing research issues such as job scheduling. Machine learning methods transform edge computing problems into classification or regression problems so that smarter decisions can be made, for example, the right offloading decision. Learning without predefined labels can be applied to traffic forecasting, which in turn supports resource allocation and offloading. This paper focuses on machine learning, edge computing, research challenges, mathematical models, and applications.
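As an illustration of the evolutionary approach described above, the following is a minimal sketch of a genetic algorithm that assigns tasks to edge nodes so as to minimize the load of the busiest node; the task costs, fitness function, population size, and mutation rate are illustrative assumptions, not taken from the paper.

```python
import random

# Hypothetical problem instance: task processing times and number of edge nodes.
TASK_COSTS = [4, 2, 7, 1, 5, 3, 6, 2]   # assumed workload per task
NUM_NODES = 3

def makespan(assignment):
    """Fitness: the load of the busiest edge node (lower is better)."""
    loads = [0.0] * NUM_NODES
    for task, node in enumerate(assignment):
        loads[node] += TASK_COSTS[task]
    return max(loads)

def crossover(a, b):
    """Single-point crossover of two task-to-node assignments."""
    point = random.randrange(1, len(a))
    return a[:point] + b[point:]

def mutate(assignment, rate=0.1):
    """Reassign each task to a random node with a small probability."""
    return [random.randrange(NUM_NODES) if random.random() < rate else node
            for node in assignment]

def genetic_schedule(pop_size=30, generations=100):
    population = [[random.randrange(NUM_NODES) for _ in TASK_COSTS]
                  for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=makespan)
        survivors = population[:pop_size // 2]          # elitist selection
        children = [mutate(crossover(*random.sample(survivors, 2)))
                    for _ in range(pop_size - len(survivors))]
        population = survivors + children
    return min(population, key=makespan)

if __name__ == "__main__":
    best = genetic_schedule()
    print("assignment:", best, "makespan:", makespan(best))
```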
... Edge Computing is a new computing paradigm that proposes processing the data produced at the edge of the network within that same edge. The main idea is to carry out data processing near the data source, saving latency, bandwidth, and power consumption, and even tolerating cloud outages [4] [5]. In addition, services can be provided at the edge of the network [3]. ...
... Because the amount of data transmitted is so large, this process leads to huge unnecessary bandwidth and computing resource usage. Moreover, privacy protection requirements must be considered [5]. ...
Conference Paper
Full-text available
During these last years, the use of embedded systems has grown exponentially, mainly due to the expansion of the Internet of Things (IoT). Data collected by IoT devices are sent to the cloud to be processed in datacenters. Edge Computing philosophy wants to change this “passive” behavior of IoT devices. The basic idea is to process data produced by IoT devices closer to where they were created, instead of sending them through long routes. New challenges have emerged with the change to the Edge Computing philosophy. One of them is reliability. IoT devices have been built with low-reliable components, reduced weight and volume, and not very high computing and memory capacity for low power consumption. With these conditions, how can we rely on the results obtained by these devices? In this work, we have tried to answer this question by analyzing the effects of the inclusion of different software-implemented Error Correction Codes in real embedded systems, typically used in IoT.
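For reference, software-implemented error correction of the kind evaluated above can be illustrated with a textbook Hamming(7,4) encoder/decoder, which corrects any single bit flip in a 4-bit value; this generic code is an assumption for illustration and not necessarily one of the codes studied in the paper.

```python
def hamming74_encode(nibble):
    """Encode 4 data bits (an int 0-15) into a 7-bit Hamming codeword."""
    d = [(nibble >> i) & 1 for i in range(4)]          # d0..d3
    p1 = d[0] ^ d[1] ^ d[3]
    p2 = d[0] ^ d[2] ^ d[3]
    p3 = d[1] ^ d[2] ^ d[3]
    # Codeword bit order (positions 1..7): p1 p2 d0 p3 d1 d2 d3
    return [p1, p2, d[0], p3, d[1], d[2], d[3]]

def hamming74_decode(bits):
    """Correct a single flipped bit (if any) and return the 4 data bits."""
    b = list(bits)
    # Recompute parity checks; the syndrome gives the 1-based error position.
    s1 = b[0] ^ b[2] ^ b[4] ^ b[6]
    s2 = b[1] ^ b[2] ^ b[5] ^ b[6]
    s3 = b[3] ^ b[4] ^ b[5] ^ b[6]
    syndrome = s1 | (s2 << 1) | (s3 << 2)
    if syndrome:                                       # flip the erroneous bit
        b[syndrome - 1] ^= 1
    d = [b[2], b[4], b[5], b[6]]
    return sum(bit << i for i, bit in enumerate(d))

if __name__ == "__main__":
    word = hamming74_encode(0b1011)
    word[3] ^= 1                                       # inject a single-bit fault
    assert hamming74_decode(word) == 0b1011
    print("recovered data:", bin(hamming74_decode(word)))
```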
... Other examples involve predictive analytics that can guide equipment maintenance and repair before actual defects or failures occur. Still other examples are often aligned with utilities, such as water treatment or electricity generation, to ensure that equipment is functioning properly and to maintain the quality of output (Shi et al., 2016). Through the investigation in this paper, the authors will demonstrate that the use of Edge computing instead of fog computing or cloud computing will pave the way for a new era of using WSNs extensively in the near future. ...
... For fixed systems, the use of edge computing can reduce data transmission and processing costs in the long run despite the initial hardware cost. In systems whose peripherals are mobile, cost becomes one of the challenges for which a feasibility study must be carried out, and the overall cost may therefore be greater (Hassan et al., 2018; Shi et al., 2016). A Cloud Assisted Mobile Edge computing (CAME) framework is proposed to minimize mobile edge computing delay and cost by applying queueing network and convex optimization theories, in which cloud resources are leased to enhance the system's computing capacity (Ma et al., 2017). A replication management system, which includes a dynamic replication creator and a specialized cost-effective scheduler that places data to reduce data access costs while meeting deadline constraints, models data scheduling for workflows as an integer programming problem (Shao et al., 2019). ...
Article
Full-text available
Data has become the lifeblood of current technology, and with the expanded reliance on technology, the need has grown for devices to connect to the surrounding environment, collect data from it, and send the data for analysis and processing, all under limited bandwidth capacity. Surveying the research on the challenges that have appeared with the widespread use of edge computing together with wireless sensor networks (WSNs), the researchers find that many aspects have already been covered, but some aspects, such as using artificial intelligence at the edge (smart edge) and security challenges, still need further work. The use of a WSN provides many benefits, such as overcoming bandwidth limitations, scalability, real-time response, and mobility. The interest in performing processing at a node rather than at a central server or in the cloud stems from the slow development of communication technology compared to the growth of processing technology: bandwidth still costs a large amount compared to the price of data processing at the edge of the network. Advances in long-lasting batteries have opened new horizons and new ideas for using wireless sensor networks more efficiently and in more fields.
... The centralized and remote architectures (Cloud Computing) used so far do not seem able to meet the QoS requirements of these new C-ITS applications [8]. Among the solutions that could be considered, distributed computing architectures, called Edge Computing [9], seem to be able to play an essential role in the future of C-ITS [10]. Indeed, these architectures, widely promoted by mobile network operators, guarantee low latencies for high bandwidth and computationally demanding applications. ...
... This is why new data processing architectures have been proposed to address the limitations of the Cloud model. The most widely considered solution today is Edge Computing [9]. The central idea of the Edge approach is to deploy Edge Servers as close as possible to users (cf. Figure 2). ...
Conference Paper
Full-text available
Data processing is a major challenge for Cooperative Intelligent Transport Systems (C-ITS). Indeed, some applications related to connected and autonomous vehicles (CAV) are driving the development of ever more complex C-ITS services: automated driving, remote driving, platooning, etc. However, these applications have significant constraints in terms of Quality of Service (QoS): latency, reliability, bandwidth, etc. The data processing architecture that will be deployed for these C-ITS services must therefore be able to guarantee short delays both in terms of data analysis and processing, as well as a high level of availability. Without this condition, the proper functioning of C-ITS services will not be guaranteed, and the consequences could be significant for road safety and traffic fluidity. Therefore, in this article, we propose to categorize C-ITS applications and highlight their requirements in terms of Quality of Service. We will then study the existing architectural solutions for data processing (Cloud Computing, Edge Computing, etc.) and identify their advantages and drawbacks. As a third step, we will associate C-ITS applications and data processing technologies to propose a complete processing architecture ensuring an efficient operation of all the C-ITS applications. We will also define in this section some use cases proving the relevance of the proposed architecture. Finally, we will highlight the current and future challenges of implementing these new data processing architectures.
... Edge computing can significantly reduce latency and increase processing speeds by bringing computing power close to the data source. It is an optimal solution for systems that require real-time data processing and decision making [5,6]. In healthcare, edge computing is particularly useful in applications that require real-time monitoring and decision making, such as wearable devices for patient monitoring and remote diagnosis and treatment [7,8]. ...
... AI is playing a central role in healthcare, unleashing its potential across a broad spectrum that includes disease diagnosis, treatment modalities, personalized medicine, and continuous monitoring of patients' well-being. On the other hand, edge computing is a distributed computing architecture that brings computing power closer to the data source [5,6,22]. In today's dynamic landscape, edge computing has emerged as a timely and common-sense solution to meet the increasing need for fast data processing and rapid decision making in a variety of domains. ...
Article
Full-text available
Edge AI, an interdisciplinary technology that enables distributed intelligence with edge devices, is quickly becoming a critical component in early health prediction. Edge AI encompasses data analytics and artificial intelligence (AI) using machine learning, deep learning, and federated learning models deployed and executed at the edge of the network, far from centralized data centers. AI enables the careful analysis of large datasets derived from multiple sources, including electronic health records, wearable devices, and demographic information, making it possible to identify intricate patterns and predict a person’s future health. Federated learning, a novel approach in AI, further enhances this prediction by enabling collaborative training of AI models on distributed edge devices while maintaining privacy. Using edge computing, data can be processed and analyzed locally, reducing latency and enabling instant decision making. This article reviews the role of Edge AI in early health prediction and highlights its potential to improve public health. Topics covered include the use of AI algorithms for early detection of chronic diseases such as diabetes and cancer and the use of edge computing in wearable devices to detect the spread of infectious diseases. In addition to discussing the challenges and limitations of Edge AI in early health prediction, this article emphasizes future research directions to address these concerns and the integration with existing healthcare systems and explore the full potential of these technologies in improving public health.
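The federated learning setting described above can be illustrated with the standard federated averaging (FedAvg) loop, in which edge devices train locally and only model parameters are aggregated; the toy linear model and synthetic client data below are assumptions for illustration, not the article's actual models or datasets.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_sgd(weights, X, y, lr=0.1, epochs=5):
    """One client's local training: plain gradient descent on squared error."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def fed_avg(client_data, rounds=20, dim=3):
    """Server loop: broadcast weights, collect local updates, average them."""
    global_w = np.zeros(dim)
    for _ in range(rounds):
        local_ws = [local_sgd(global_w, X, y) for X, y in client_data]
        sizes = np.array([len(y) for _, y in client_data], dtype=float)
        # Weighted average: raw health records never leave the devices.
        global_w = np.average(local_ws, axis=0, weights=sizes)
    return global_w

# Synthetic "wearable" data on three edge devices, all drawn from w_true.
w_true = np.array([1.5, -2.0, 0.5])
clients = []
for _ in range(3):
    X = rng.normal(size=(40, 3))
    y = X @ w_true + 0.05 * rng.normal(size=40)
    clients.append((X, y))

print("federated estimate:", np.round(fed_avg(clients), 2))
```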
... To satisfy IoVT's requirements on the computing capability, the authors in [9] propose a cloud-based solution of realtime 3D visualization of outdoor scenes in IoVT, where the optimization algorithms are executed in the clouds to calculate the location of moving objects and detect abnormal events. By deploying servers in wireless access networks to users, mobile edge computing (MEC) can offer IoVT devices robust computing, storage, networking, and communication capabilities [10]. A few studies have been done on resource allocation optimization in IoVT with MEC [2], [11]- [13], where IoVT devices offload visual processing tasks to edge servers to obtain better quality of service (QoS). ...
... The overall procedure iterates the following steps: create the graph G(V_e, E_e; π_e); allocate the transmit power of the IoVT devices according to Algorithm 1 and obtain p*; let each MEC-BS allocate its computing resources to the associated IoVT devices by solving problem (21) and obtain C*; calculate the adjacency matrix Φ of the graph G(V_e, E_e; π_e); find the negative loops in the graph according to EBFSA or FGSA; if a negative loop L is found, update A* and B* according to L; repeat until there is no negative loop in the graph or the maximum number of iteration steps is reached; finally, return the IoVT device association, grouping, power allocation, and computing resource allocation strategies A*, B*, C*, p*. Computational complexity analysis: in Algorithm 1, the optimization variable is an N-dimensional vector, so the worst-case complexity of Algorithm 1 is O(N^3); when solving problem (16), the optimization variable is a matrix with M rows and N columns, so the worst-case computational complexity is O(M^3 N^3). ...
Preprint
Full-text available
Internet of Video Things (IoVT) brings much higher requirements on the transmission and computing capabilities of wireless networks than the traditional Internet of Things (IoT). Non-orthogonal multiple access (NOMA) and mobile edge computing (MEC) have been considered as two promising technologies to satisfy these requirements. However, successive interference cancellation (SIC) and grouping operations in NOMA, as well as delay-sensitive IoVT video tasks with different priorities, make it challenging to achieve optimal performance in NOMA-assisted IoVT with MEC. To address this issue, we formulate a joint optimization problem where both NOMA operations and MEC offloading are involved, with the goal of minimizing the weighted average total delay. To tackle such an intractable problem, we propose a graph theory-based optimization framework, then decompose and transform the problem into finding negative loops in the weighted directed graph. Specifically, we design a priority-based SIC decoding mechanism and propose convex optimization-based power allocation and computing resource allocation algorithms to calculate the adjacency matrix. Then, two negative loop searching algorithms are adopted to obtain the device association and grouping strategies. Simulation results demonstrate that compared with existing algorithms, the proposed algorithm reduces the weighted average total delay by up to 92.43% and improves the transmission rate of IoVT devices by up to 79.1%.
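The framework above reduces device association and grouping to finding negative loops in a weighted directed graph. Without reproducing the paper's EBFSA/FGSA algorithms, the sketch below shows the standard Bellman–Ford style negative-cycle detection that such searches build on; the edge list is a hypothetical example.

```python
def find_negative_cycle(num_nodes, edges):
    """Bellman-Ford: return a negative-weight cycle as a node list, or None.

    `edges` is a list of (u, v, weight) tuples on nodes 0..num_nodes-1.
    """
    dist = [0.0] * num_nodes            # virtual source reaching every node
    pred = [-1] * num_nodes
    marked = -1
    for _ in range(num_nodes):          # |V| rounds of relaxation
        marked = -1
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
                pred[v] = u
                marked = v
    if marked == -1:
        return None                     # no relaxation in the last round
    for _ in range(num_nodes):          # step back until we sit on the cycle
        marked = pred[marked]
    cycle, node = [marked], pred[marked]
    while node != marked:
        cycle.append(node)
        node = pred[node]
    return cycle[::-1]

# Hypothetical cost graph: the loop 1 -> 2 -> 3 -> 1 has total weight -1.
edges = [(0, 1, 4), (1, 2, 1), (2, 3, -3), (3, 1, 1), (3, 4, 2)]
print(find_negative_cycle(5, edges))
```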
... Instead of a centralized Cloud system, data centers of various sizes in the core network, and smaller data-centers or devices at the edge of the network, can be used collaboratively to form a single large-scale geo-distributed system. Thus, Fog Computing is at the crossroads of complementary areas of distributed systems: Cloud Computing [5] of course, but also Edge Computing [25] and IoT [18]. ...
Conference Paper
Full-text available
Fog Computing is a paradigm aiming to decentralize the Cloud by geographically distributing away computation, storage, network resources and related services. It provides several benefits such as reducing the number of bottlenecks, limiting unwanted data movements, etc. However, managing the size, complexity and heterogeneity of Fog systems to be designed, developed, tested, deployed, and maintained, is challenging and can quickly become costly. According to best practices in software engineering, verification tasks could be performed on system design prior to its actual implementation and deployment. Thus, we propose a generic model-based approach for verifying Fog systems at design time. Named VeriFog, this approach is notably based on a customizable Fog Modeling Language (FML). We experimented with our approach in practice by modeling three use cases, from three different application domains, and by considering three main types of non-functional properties to be verified. In direct collaboration with our industrial partner Smile, the approach and underlying language presented in this paper are necessary steps towards a more global model-based support for the complete life cycle of Fog systems.
... Addressing the challenge of real-time detection of rail damage with a high-speed train as a moving carrier poses significant difficulties. As mentioned in [17], "Edge AI technology is an enabling technology that allows computation at the edge." Edge AI is renowned for its small size, low power consumption, and excellent performance in terms of recognition speed and accuracy. ...
Article
Full-text available
Railway track malfunctions can lead to severe consequences such as train derailments and collisions. Traditional manual inspection methods suffer from inaccuracies and low efficiency. Contemporary deep learning-based detection techniques have challenges in model accuracy and inference speed, and are often associated with expensive computational costs and high power consumption when deployed on devices. We propose an optimized lightweight network based on YOLOv5-lite, which employs an enhanced Fused Mobile Inverted Bottleneck Convolution (BF_MBConv) to reduce the number of parameters and floating-point operations (FLOPs) during backbone feature extraction. The Squeeze-and-Excitation (SE) mechanism is adopted, emphasizing more critical track features by assigning different weights from a channel-wise perspective. Utilizing DropBlock with holistic dropping as a substitute for Dropout with random dropping offers a more efficient means of discarding redundant features. In the neck section, Shuffle convolution replaces the conventional one, significantly reducing the parameter count while better integrating feature information after group convolution. Lastly, the incorporation of Focal-EIoU Loss augments regression, and with the application of incremental dataset processing techniques, it addresses accuracy and sample imbalance issues. The refined algorithm achieves a mean Average Precision (mAP)@0.5 of 94.4%, marking an 8.13% improvement over the original YOLOv5-lite. Moreover, by leveraging an embedded platform integrated with an Intel® Movidius™ Neural Compute Stick cluster as the portable device for model deployment, the model achieved a frame rate of 18.7 FPS. Our findings indicate that this approach can efficiently and accurately detect railway track damage. Additionally, it addresses the previously overlooked issue of the performance-cost trade-off, countering the past trend of prioritizing high performance at the expense of elevated power consumption and cost, and proposing a harmonized approach that prioritizes efficiency and affordability.
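The Squeeze-and-Excitation mechanism mentioned above reweights feature channels using a learned gate computed from globally pooled statistics. The following PyTorch-style sketch shows a generic SE block; the reduction ratio and layer structure are common defaults, not necessarily the authors' exact implementation.

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Generic Squeeze-and-Excitation: global pooling + two-layer gating."""

    def __init__(self, channels, reduction=16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)          # squeeze: B x C x 1 x 1
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),                            # per-channel weights in (0, 1)
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        weights = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * weights                           # excite: rescale each channel

if __name__ == "__main__":
    feats = torch.randn(2, 64, 32, 32)               # dummy track-image features
    print(SEBlock(64)(feats).shape)                  # torch.Size([2, 64, 32, 32])
```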
... The edge computing mode can be used for real-time data processing. The data is preprocessed before uploading to the cloud, which relieves the pressure on the data center, improves the security and privacy of the data at the edge, and reduces bandwidth costs and energy consumption [3]. Nevertheless, many complex computing tasks (e.g., image recognition) in the real world are difficult to handle based on the traditional von Neumann architecture [4]. ...
Article
The processing of nonstructural visual data requires not only computers to perform efficient calculations but also edge visual perception devices to access and process information effectively. Therefore, it is necessary to develop low-power neuromorphic optical sensors with integrated sensing, memory and processing functions. The neuromorphic optical sensors can be achieved by combining photoreceptors with general neuromorphic devices, or by developing neuromorphic photodetectors. In this review, we first introduce the working mechanism and basic parameters of the general neuromorphic devices. Then, the neuromorphic photodetectors are summarized, including their classifications, working mechanisms and applications. Neuromorphic devices have shown great potential in visual perception, but there are still some directions to be further explored, such as more microscopic explanation in mechanisms, more stable multifunctional devices, and more abundant application scenarios.
... To overcome these issues, edge computing has been proposed as one of the key technologies in 5G application scenarios. Unlike centralized processing, edge computing enables user devices to offload computation tasks directly to edge computing servers (ECSs), reducing the data traffic on the network bandwidth and improving system responsiveness, thus better meeting the low-latency and high-reliability service requirements of 5G data networks [4][5][6][7]. By deploying ECSs, which act as micro-clouds, in ground infrastructure, edge computing brings computing resources closer to service users, reducing latency and energy consumption in long-distance data transmission processes. ...
Article
Full-text available
In next-generation mobile communication scenarios, more and more user terminals (UEs) and edge computing servers (ECSs) are connected to the network. To ensure the experience of edge computing services, we designed an unmanned aerial vehicle (UAV)-assisted edge computing network application scenario. In the considered scenario, the UAV acts as a relay node to forward edge computing tasks when the performance of the wireless channel between UEs and ECSs degrades. In order to minimize the average delay of edge computing tasks, we formulate the optimization problem of joint UE–ECS matching and UAV three-dimensional hovering position deployment. Further, we transform this mixed integer nonlinear program into a continuous-variable decision process and design a corresponding Proximal Policy Optimization (PPO)-based joint optimization algorithm. Extensive latency results demonstrate that the suggested algorithm reaches a stable reward value when the number of training steps hits three million, which verifies the algorithm's desirable convergence property. Furthermore, the algorithm's efficacy has been confirmed through simulation in various environments. The experimental findings show that the PPO-based co-optimization algorithm consistently attains a lower average latency, with at least an 8% reduction compared to the baseline scenarios.
... Such massive, high-speed, and rapidly changing data has brought great challenges to the traditional cloud computing service model. Edge computing and mobile edge computing aim to solve these problems through real-time processing of data at the edge of the network, providing users with intelligent edge services "nearby" [2]. ...
Article
Full-text available
Driven by the demand for ubiquitous connection in the Internet of Everything era, this paper introduces a new low Earth orbit (LEO) satellite-based multi-access edge computing fusion architecture. The architecture regards LEO satellites deployed with high-performance edge modules as superior nodes, which can provide onboard-processing-mode edge task offloading services for users in complex regions. At the same time, the superior node also makes offloading decisions and can offload some edge user tasks to the edge service center in the ground network. Based on differential game theory, we propose a two-stage computing resource purchase strategy and a task-offloading resource pricing strategy to ensure the minimum cost of edge computing services for edge users and the maximum benefit for edge service providers, and prove the existence of a unique Nash equilibrium solution using Picard's theorem. An algorithm based on the Runge–Kutta method is designed to solve the Nash equilibrium, and the simulation results show the effectiveness of the proposed method.
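The abstract mentions a Runge–Kutta-based algorithm for computing the Nash equilibrium of the differential game. Without reproducing the game itself, the sketch below shows the classical fourth-order Runge–Kutta step used to integrate such state dynamics, applied to an assumed toy ODE.

```python
def rk4_step(f, t, y, h):
    """One classical 4th-order Runge-Kutta step for dy/dt = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def integrate(f, t0, y0, t_end, steps=100):
    """Integrate from t0 to t_end with fixed-size RK4 steps."""
    h = (t_end - t0) / steps
    t, y = t0, y0
    for _ in range(steps):
        y = rk4_step(f, t, y, h)
        t += h
    return y

# Toy stand-in for the game's state dynamics: dy/dt = -2*y with y(0) = 1.
approx = integrate(lambda t, y: -2.0 * y, 0.0, 1.0, 1.0)
print(approx)          # close to exp(-2) ≈ 0.1353
```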
... Several analyses of edge computing platforms show that edge computing is a good solution for cooperation among the cloud, network communication, and edge equipment (Chen et al. 2018, Martin Fernandez et al. 2018, Raza et al. 2019). This approach offers several advantages, including reduced latency, bandwidth optimization, enhanced privacy and security, and offline operation (Hassan et al. 2019, Shi et al. 2016, Varghese et al. 2016). Also, among the most important concerns at the edge are data privacy, a reduced attack surface, local threats, communication security, and the trustworthiness of edge devices. ...
Article
Full-text available
This paper aims to present a modeling approach for the seamless data streaming process from smart IoT systems to Apache Kafka, leveraging the MQTT protocol. The paper begins by discussing the concept of real-time data streaming, emphasizing the need to transfer data from IoT/edge devices and sensors to Apache Kafka in a timely manner. The second part consists of a literature overview that shows the analysis and systematization of different types of architectures in the broad sense of crowdsensing, followed by specific architectures regarding edge and cloud computing. The methodology section will propose an infrastructure and data streaming architecture for smart environment services, such as air quality monitoring. Lastly, a discussion about results and future development will be shown in the last two sections. The proposed integration approach offers several advantages, including efficient and scalable data streaming, real-time analytics, and enhanced data processing capabilities. Keywords: real-time data streaming, smart healthcare, Apache Kafka, data integration, stream processing
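A minimal MQTT-to-Kafka bridge of the kind the paper's architecture implies might look like the sketch below, assuming the paho-mqtt and kafka-python packages and locally running brokers; the topic names are illustrative, and the exact MQTT client constructor arguments depend on the installed paho-mqtt version.

```python
# Minimal MQTT-to-Kafka bridge sketch for streaming sensor readings.
import json

import paho.mqtt.client as mqtt
from kafka import KafkaProducer

MQTT_BROKER = "localhost"
MQTT_TOPIC = "sensors/air-quality"        # assumed IoT topic
KAFKA_TOPIC = "air-quality-readings"      # assumed Kafka topic

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

def on_message(client, userdata, msg):
    """Forward every MQTT sensor reading to Kafka for stream processing."""
    try:
        reading = json.loads(msg.payload)
    except ValueError:
        return                            # drop malformed payloads
    producer.send(KAFKA_TOPIC, value=reading)

client = mqtt.Client()                    # constructor args vary by paho version
client.on_message = on_message
client.connect(MQTT_BROKER, 1883)
client.subscribe(MQTT_TOPIC)
client.loop_forever()                     # blocking loop: consume and forward
```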
... Deep learning is the most popular data mining method in recent years and has demonstrated remarkable performance in diverse industrial domains, such as computer vision, automatic speech recognition (ASR), natural language processing (NLP), and intelligent recommendation. Evolving from traditional cloud computing technology, edge computing extends powerful computing resources and efficient services to network edge nodes, reducing network bandwidth and latency while enhancing energy efficiency and privacy protection [1]. At present, ...
Article
Full-text available
Recommendation systems play a pivotal role in improving product competitiveness. Traditional recommendation models predominantly use centralized feature processing to operate, leading to issues such as excessive resource consumption and low real-time recommendation concurrency. This paper introduces a recommendation model founded on deep learning, incorporating edge computing and knowledge distillation to address these challenges. Recognizing the intricate relationship between the accuracy of deep learning algorithms and their complexity, our model employs knowledge distillation to compress deep learning. Teacher–student models were initially chosen and constructed in the cloud, focusing on developing structurally complex teacher models that incorporate passenger and production characteristics. The knowledge acquired from these models was then transferred to a student model, characterized by weaker learning capabilities and a simpler structure, facilitating the compression and acceleration of an intelligent ranking model. Following this, the student model underwent segmentation, and certain computational tasks were shifted to end devices, aligning with edge computing principles. This collaborative approach between the cloud and end devices enabled the realization of an intelligent ranking for product listings. Finally, a random selection of the passengers’ travel records from the last five years was taken to test the accuracy and performance of the proposed model, as well as to validate the intelligent ranking of the remaining tickets. The results indicate that, on the one hand, an intelligent recommendation system based on knowledge distillation and edge computing successfully achieved the concurrency and timeliness of the existing remaining ticket queries. Simultaneously, it guaranteed a certain level of accuracy, and reduced computing resource and traffic load on the cloud, showcasing its potential applicability in highly concurrent recommendation service scenarios.
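Knowledge distillation as used above transfers a teacher model's softened predictions to a smaller student. The PyTorch-style loss below is the standard temperature-scaled formulation; the temperature and weighting coefficient are assumed values, not the paper's settings.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=4.0, alpha=0.7):
    """Blend soft-target KL loss (teacher) with hard-label cross-entropy."""
    soft_targets = F.softmax(teacher_logits / temperature, dim=-1)
    log_student = F.log_softmax(student_logits / temperature, dim=-1)
    # The KL term is scaled by T^2 so its gradients match the CE term's scale.
    kd = F.kl_div(log_student, soft_targets, reduction="batchmean") * temperature ** 2
    ce = F.cross_entropy(student_logits, labels)
    return alpha * kd + (1 - alpha) * ce

if __name__ == "__main__":
    teacher = torch.randn(8, 10)                 # dummy teacher logits
    student = torch.randn(8, 10, requires_grad=True)
    labels = torch.randint(0, 10, (8,))
    loss = distillation_loss(student, teacher, labels)
    loss.backward()
    print(float(loss))
```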
... Therefore, the development of edge computing has successfully addressed this issue. When the Internet of Vehicles and edge computing are combined, a portion of the vehicle's computing tasks are delegated to the edge service nodes [10], which reduces the user's computing burden, reduces the time it takes for data to be transmitted over networks, and significantly increases the efficiency of data processing [11][12][13]. Additionally, this complies with the Internet of Vehicles' real-time processing standards. ...
Article
With the rapid development of 5G wireless communication and sensing technology, the Internet of Vehicles (IoV) will establish a widespread network between vehicles and roadside infrastructure. The collected road information is transferred to the cloud server with the assistance of roadside infrastructure, where it is stored and made available to other vehicles as a resource. However, in an open cloud environment, message confidentiality and vehicle identity privacy are severely compromised, and current attribute-based encryption algorithms still burden vehicles with large computational costs. In order to resolve these issues, we propose a message-sharing scheme in IoV based on edge computing. To start, we utilize attribute-based encryption techniques to protect the communications being delivered. We introduce edge computing, in which the vehicle outsources some operations in encryption and decryption to roadside units to reduce the vehicle's computational load. Second, to guarantee the integrity of the message and the security of the vehicle identity, we utilize anonymous identity-based signature technology. At the same time, we can batch verify the message, which further reduces the time and transmission of verifying a large number of message signatures. Based on the computational Diffie-Hellman problem, it is demonstrated that the proposed scheme is secure under the random oracle model. Finally, the performance analysis results show that our work is more computationally efficient compared to existing schemes and is more suitable for actual vehicle networking.
... subject to (1). Here, r represents the discount rate. ...
Preprint
Full-text available
The dynamic changes of mobile terminals have led to a more complex environment for edge computing resource allocation. Edge nodes are generally mobile wireless devices or network devices with limited processing capacity, and have relatively limited computing resources. The importance of computing tasks for terminal devices changes dynamically, and the corresponding computing efficiency also changes. Therefore, dynamic resource allocation is particularly important in edge computing environments. In order to maximize the efficiency of using edge computing resources, increase the profit of the edge node, and reduce the cost of edge terminal devices, exploring an efficient dynamic resource pricing and allocation mechanism for edge computing is an urgent problem that we need to solve. In this paper, we describe the dynamic pricing and resource allocation problem between the edge node and terminal devices as a Stackelberg dynamic game model, and solve the Nash equilibrium solution of the model through Bellman dynamic programming theory, obtaining, during the resource allocation process, the optimal computing resource price of the edge node and the optimal computing resource usage of the terminal devices.
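In the Stackelberg setting sketched above, the edge node (leader) posts a unit price and each terminal device (follower) best-responds with a resource demand. The closed-form utilities below are illustrative assumptions used only to show the leader/follower structure, not the paper's actual model or its Bellman-based solution.

```python
import numpy as np

# Assumed device valuations: higher a_i means the task matters more to device i.
valuations = np.array([2.0, 3.5, 1.2, 4.0])

def follower_demand(price):
    """Each device maximizes a*ln(1 + x) - price*x, giving x = max(a/price - 1, 0)."""
    return np.maximum(valuations / price - 1.0, 0.0)

def leader_revenue(price, capacity=6.0):
    """Edge node revenue, with infeasible prices (demand > capacity) ruled out."""
    demand = follower_demand(price)
    if demand.sum() > capacity:
        return -np.inf
    return price * demand.sum()

# Leader (edge node) picks the revenue-maximizing price on a grid.
prices = np.linspace(0.1, 5.0, 500)
best_price = max(prices, key=leader_revenue)
print("price:", round(float(best_price), 2),
      "allocations:", follower_demand(best_price).round(2))
```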
... Edge computing has been offered as a cloud storage solution for data sources that are mostly located outside the cloud [6], [7]. The technology uses the storage capabilities of end devices to bring the training model closer to where data is produced [8], [9]. The cloud server, end devices, and edge nodes make up the edge-cloud computing network [1], [10]. ...
Article
Full-text available
In today’s world, the importance of the Green Internet of Things (GIoT) in transformed sustainable smart cities cannot be overstated. For a variety of applications, the GIoT may make use of advanced machine learning (ML) methodologies. However, owing to high processing costs and privacy issues, centralized ML-based models are not a feasible option for the large data kept at a single cloud server and created by multiple devices. In such circumstances, edge-based computing may be used to increase the privacy of GIoT networks by bringing them closer to users and decentralizing them without requiring a central authority. Nonetheless, enormous amounts of data are stored in a distributed manner, and managing them for application purposes remains difficult. Hence, federated learning (FL) is one of the most promising solutions for bringing learning to end devices through edge computing without sharing private data with a central server. Therefore, this paper proposes a federated learning-enabled edge-based GIoT system, which seeks to improve the communication strategy while lowering liability in terms of energy management and data security for data transmission. The proposed model uses FL to produce feature values for data routing, which could aid in sensor training for identifying the best routes to edge servers. Furthermore, combining FL-enabled edge-based techniques simplifies security solutions while also allowing for a more efficient computing system. The experimental results show improved performance against existing models in terms of network overhead, route interruption, energy consumption, and end-to-end delay.
... The lack of standardization among IoT devices makes it difficult for them to communicate with each other. This can lead to a fragmented ecosystem where devices cannot be integrated into larger systems, limiting their functionality and usefulness (Shi et al., 2016). ...
Article
Full-text available
This article presents an in-depth analysis of the current and future Internet of Things (IoTs) technologies and applications. A brief overview of the significance of the current state of IoT technologies is discussed. The article examines emerging technologies and techniques such as 5G, edge computing, and AI, that will continue to transform ongoing IoT applications. The article explores the applications of IoT in various industries such as healthcare, transportation, smart cities, and agriculture. The article highlights the security and privacy concerns and solutions to IoT applications in different domains. This article offers some unique predictions on IoT growth in the next 5-10 years and a final discussion on the roles of IoT technologies in enabling Industry 4.0.
... The current era is marked by a burgeoning trend of mobile computing, which has led to a rapid proliferation of mobile devices including cell phones, wearable devices, and industrial sensors. In light of the predictable traffic congestion and rigorous quality of experience requirements that pose significant challenges, researchers have turned to a newly proposed paradigm known as multi-access edge computing (MEC) [1][2][3]. In contrast to the conventional approach of centralizing request processing within cloud data centers, the multi-access edge computing (MEC) paradigm involves decentralized task execution at the network's edge, which is typically facilitated by edge servers. ...
Article
Full-text available
Multi-access Edge Computing (MEC) has emerged as an essential paradigm to address the challenges posed by the proliferation of connected mobile devices. By constructing a MEC-based service system with edge servers in proximity and deploying modules or services on them, these devices can perform complex tasks efficiently with their own resources. However, the significant energy consumption associated with this computing paradigm poses a major obstacle to its widespread adoption. Thus, it is imperative to carefully configure the MEC-based service system to ensure optimal performance and cost-effectiveness. Furthermore, the dynamic nature of the system’s environment or context necessitates that the configuration be adaptable over time to fully utilize limited resources and ensure stability and energy efficiency. In this paper, we present an investigation and model of how mobile devices’ service requests are processed in a MEC-based service system. We propose a reinforcement learning-based algorithm to train a policy that dynamically reconfigures the system to minimize the average service response time while maximizing stability and energy efficiency. Our approach is validated through experiments on the YouTube usage dataset, and we demonstrate that it outperforms the baseline models.
... The most important of them is Edge Computing, in which the computational tasks are performed at the edge of the network, where the data are generated, rather than in a centralized cloud or data center. The main advantages of Edge Computing over traditional cloud computing are (Shi et al. 2016;Cao et al. 2020): ...
Article
Full-text available
Research in short-term traffic forecasting has been blooming in recent years due to its significant implications in traffic management and intelligent transportation systems. The unprecedented advancements in deep learning have provided immense opportunities to leverage traffic data sensed from various locations of the road network, yet significantly increased the models’ complexity and data and computational requirements, limiting the actionability of the models. Consequently, the meaningful representation of traffic flow data and the road network has been highlighted as a key challenge in improving the efficiency, as well as the accuracy and reliability of forecasting models. This paper provides a systematic review of literature dedicated to spatiotemporal traffic forecasting. Three main representation approaches are identified, namely the stacked vector, image/grid, and graph, and are critically analyzed and compared in relation to their efficiency, accuracy and associated modeling techniques. Based on the findings, future research directions in traffic forecasting are proposed, aiming to increase the adoption of the developed models in real-world applications.
Article
Edge computing enabled Intelligent Road Network (EC-IRN) provides powerful and convenient computing services for vehicles and roadside sensing devices. The continuous emergence of transportation applications has caused a huge burden on roadside units (RSUs) equipped with edge servers in the Intelligent Road Network (IRN). Collaborative task scheduling among RSUs is an effective way to solve this problem. However, it is challenging to achieve collaborative scheduling among different RSUs in a completely decentralized environment. In this paper, we first model the interactions involved in task scheduling among distributed RSUs as a Markov game. Given that multi-agent deep reinforcement learning (MADRL) is a promising approach for the Markov game in decision optimization, we propose a collaborative task scheduling algorithm based on MADRL for EC-IRN, named CA-DTS, aiming to minimize the long-term average delay of tasks. To reduce the training costs caused by trial-and-error, CA-DTS specially designs a reward function and utilizes the distributed deployment and collective training architecture of counterfactual multi-agent policy gradient (COMA). To improve the stability of performance in large-scale environments, CA-DTS takes advantage of the action semantics network (ASN) to facilitate cooperation among multiple RSUs. The evaluation results of both the testbed and simulation demonstrate the effectiveness of our proposed algorithm. Compared with the baselines, CA-DTS can achieve convergence about 35% faster, and obtain average task delay that is lower by approximately 9.4%, 9.8%, and 6.7%, in different scenarios with varying numbers of RSUs, service types, and task arrival rates, respectively.
Article
Full-text available
The convergence of Artificial Intelligence (AI) and Blockchain technologies has emerged as a powerful paradigm to address the challenges of data management, security, and privacy in the Edge of Things (EoTs) environment. This bibliometric analysis aims to explore the research landscape and trends surrounding the topic of convergence of AI and Blockchain for EoTs to gain insights into its development and potential implications. For this, research published during the past six years (2018-2023) in the Web of Science indexed sources has been considered as it has been a new field. VoSViewer-based full counting methodology has been used to analyze citation, co-citation, and co-authorship based collaborations among authors, organizations, countries, sources, and documents. The full counting method in VoSViewer involves considering all authors or sources with equal weight when calculating various bibliometric indicators. Co-occurrence, timeline, and burst detection analysis of keywords and published articles were also carried out to unravel significant research trends on the convergence of AI and Blockchain for EoTs. Our findings reveal a steady growth in research output, indicating the increasing importance and interest in AI-enabled Blockchain solutions for EoTs. Further, the analysis uncovered key influential researchers and institutions driving advancements in this domain, shedding light on potential collaborative networks and knowledge hubs. Additionally, the study examines the evolution of research themes over time, offering insights into emerging areas and future research directions. This bibliometric analysis contributes to the understanding of the state-of-the-art in convergence of AI and Blockchain for EoTs, highlighting the most influential works and identifying knowledge gaps. Researchers, industry practitioners, and policymakers can leverage these findings to inform their research strategies and decision-making processes, fostering innovation and advancements in this cutting-edge interdisciplinary field.
Article
Full-text available
The increasing number of network attacks has led to the development of intrusion detection systems. However, these methods often face limitations such as high-dimensional traffic flow data, which can reduce attack detection rates, and sensitivity to noise, which affects anomaly detection performance. This paper introduces a new model based on recurrent deep learning and instance-level horizontal reduction to detect anomalies and network attacks. The model uses nested sliding windows, which move with a specific step over the data and generate a different number of histogram outputs based on the type of anomaly in the data. Evaluation results on five databases show that the proposed model achieves a high accuracy of 99% in detecting different attacks, demonstrating the success of this new approach combined with deep recurrent neural networks in detecting anomalies.
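The nested sliding-window idea can be illustrated by sliding a window over a traffic-feature sequence and emitting one normalized histogram per step as input to a downstream recurrent detector; the window length, stride, bin count, and synthetic traffic below are assumed values.

```python
import numpy as np

def sliding_histograms(series, window=50, stride=10, bins=8, value_range=(0.0, 1.0)):
    """Slide a window over a 1-D feature series and emit one histogram per step."""
    feats = []
    for start in range(0, len(series) - window + 1, stride):
        chunk = series[start:start + window]
        hist, _ = np.histogram(chunk, bins=bins, range=value_range)
        feats.append(hist / window)          # normalize to a distribution
    return np.array(feats)

rng = np.random.default_rng(1)
normal_traffic = rng.beta(2, 5, size=500)    # assumed benign flow statistics
burst = rng.beta(5, 1, size=100)             # injected anomalous burst
trace = np.concatenate([normal_traffic, burst, normal_traffic])

features = sliding_histograms(trace)
print(features.shape)                        # (windows, 8): one histogram per window
```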
Preprint
Full-text available
The human tendency to measure is deeply rooted in our cognitive processes, survival instincts and the evolution of complex systems. Harmonizing data streams during information measurement ensures accuracy, comparability, and reliability. This facilitates meaningful analysis and informed decision-making, yielding valuable insights. Our hypothesis asserts that a novel MIMO channel design can explore the achievable Shannon capacity for edge AI protocols such as CoAP, MQTT, AMQP, and HTTP. MIMO's attributes align with edge AI protocols' needs, such as higher data rates, reliability, and adaptability, which are critical for successful processing. However, quantifying edge AI protocols through MIMO for Shannon capacity remains unexplored. We present a validated mathematical MIMO framework for edge AI protocols using Shannon capacity, yielding findings up to 20 kbps. Our results verify a customized framework for edge AI protocols, aligned with Shannon's principles. Our mathematical MIMO framework for Shannon capacity measurement in edge-based smart machines provides a precise, relevant, and informed assessment of edge AI protocol performance. This pioneering effort sets the stage for optimized protocols that conquer edge challenges, ensuring seamless connectivity for smart machines.
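The Shannon-capacity evaluation referred to above can be outlined with the standard MIMO capacity expression C = log2 det(I + (SNR/Nt)·HHᴴ); the antenna counts, SNR, and Rayleigh channel model below are assumptions, and scaling by bandwidth to obtain kbps figures is omitted.

```python
import numpy as np

def mimo_capacity_bits_per_hz(num_tx, num_rx, snr_linear, trials=2000, seed=0):
    """Average Shannon capacity (bit/s/Hz) of an i.i.d. Rayleigh MIMO channel."""
    rng = np.random.default_rng(seed)
    caps = []
    for _ in range(trials):
        # Complex Gaussian channel matrix with unit average power per entry.
        H = (rng.normal(size=(num_rx, num_tx)) +
             1j * rng.normal(size=(num_rx, num_tx))) / np.sqrt(2)
        gram = np.eye(num_rx) + (snr_linear / num_tx) * H @ H.conj().T
        caps.append(np.log2(np.linalg.det(gram).real))
    return float(np.mean(caps))

# Assumed 2x2 link at 10 dB SNR; multiply by channel bandwidth to get bit/s.
print(mimo_capacity_bits_per_hz(2, 2, snr_linear=10 ** (10 / 10)))
```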
Chapter
Developing a medical device or solution involves gathering biomedical data from different devices, which can have different communication protocols, characteristics and limitations. Thus, deploying a test lab to record experiments can be challenging, requiring the synchronisation of the source signals, processing of the information, storing it and extracting conclusions. In this work, we face this problem by developing an edge Internet of Things (IoT) system composed of a Raspberry Pi and an NVIDIA Jetson TX2 device (integrating an NVIDIA Pascal GPU). The information from two biomedical devices (Biosignals Plux and Polar Verity Sense) is synchronised and fused, interpolating the information and extracting features such as mean and standard deviation in real-time. In parallel, the Jetson TX2 device is able to execute a Deep Learning (DL) model in real-time as new data is received using the Message Queuing Telemetry Transport (MQTT) protocol. Also, an online learning approach involving a loss function that takes into account past predictions is proposed, as well as a density-based clustering algorithm that selects the most representative samples of the most repeated class. The system has been deployed in the Smart Home of the University of Almería. Results show that the proposed fusion scheme accurately represents the intrinsic information of the received data and enables the DL model to run in real time. The next steps involve the deployment of the system in a hospital, in order to monitor epilepsy patients, create a robust dataset and detect epileptic seizures in real-time.
Chapter
The development of next generation wireless communication technology and Artificial Intelligence (AI) techniques to handle massive data has led to the usage of smart systems to improve the quality of human life. One such significant breakthrough is the use of smart healthcare systems. In this project, a Deep Learning (DL) based pathology detection system is suggested. A Convolutional Neural Network (CNN) is used to classify the EEG (electroencephalogram) signal and determine whether it belongs to a pathological or a normal individual. For this project, publicly available EEG signal data were used. The data signal is preprocessed to remove noise using a Finite Impulse Response (FIR) filter. The dataset is divided in the ratio of 7:3 into train and test data. Using the test data for validation, the model is found to predict with 98.0% accuracy, 96.67% specificity, and 100.0% sensitivity.
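The preprocessing and evaluation steps described above, FIR filtering of the EEG followed by a 7:3 train/test split, can be sketched with SciPy; the sampling rate, filter order, and cutoff frequencies are assumptions rather than the chapter's actual settings, and dummy data stands in for the public dataset.

```python
import numpy as np
from scipy.signal import firwin, lfilter

FS = 256                 # assumed EEG sampling rate in Hz
NUM_TAPS = 101           # assumed FIR filter length

def bandpass_eeg(signal, low_hz=0.5, high_hz=40.0):
    """Band-pass FIR filter to suppress baseline drift and high-frequency noise."""
    taps = firwin(NUM_TAPS, [low_hz, high_hz], pass_zero=False, fs=FS)
    return lfilter(taps, 1.0, signal)

def train_test_split_7_3(X, y):
    """Deterministic 7:3 split, matching the ratio used in the chapter."""
    cut = int(0.7 * len(X))
    return X[:cut], X[cut:], y[:cut], y[cut:]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    raw = rng.normal(size=(10, FS * 4))          # ten dummy 4-second epochs
    filtered = np.vstack([bandpass_eeg(ep) for ep in raw])
    labels = rng.integers(0, 2, size=10)
    X_tr, X_te, y_tr, y_te = train_test_split_7_3(filtered, labels)
    print(X_tr.shape, X_te.shape)
```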
Chapter
Deep neural network (DNN) inference service at the edge is promising, but it is still non-trivial to achieve high-throughput for multi-DNN model deployment on resource-constrained edge devices. Furthermore, an edge inference service system must respond to requests with bounded latency to maintain a consistent service-level objective (SLO). To address these challenges, we propose Octopus, a flexible and adaptive SLO-aware progressive inference scheduling framework to support both computer vision (CV) and natural language processing (NLP) DNN models on a multi-tenant heterogeneous edge cluster. Our deep reinforcement learning-based scheduler can automatically determine the optimal joint configuration of 1) DNN batch size, 2) DNN model exit point, and 3) edge node dispatching for each inference request to maximize the overall throughput of edge clusters. We evaluate Octopus using representative CV and NLP DNN models on an edge cluster with various heterogeneous devices. Our extensive experiments reveal that Octopus is adaptive to various requests and dynamic networks, achieving up to a 3.3× improvement in overall throughput compared to state-of-the-art schemes while satisfying soft SLO and maintaining high inference accuracy.
Chapter
The Industrial Internet has revolutionized the way businesses operate by integrating Internet technology with industrial processes. With the advent of the Industrial Internet, the integration of data and automation has become possible, creating new opportunities for businesses to optimize their operations and improve their efficiency. Intelligent technologies, including deep learning, attentive model compression, computation sharing, and deep optimization, are driving a transformation in the Industrial Internet. These technologies empower businesses with data-driven insights, enabling them to make informed decisions and streamline their workflows. Deep learning has played a significant role in optimizing storage and computational resources. With the development of deep learning techniques such as neural network compression and quantization, deep learning models can be deployed in industrial equipment in a smaller and more efficient manner without sacrificing accuracy. This allows for more efficient use of storage space and computational resources, which is crucial in industrial applications where resource efficiency is essential. In addition, techniques such as transfer learning allow for the reuse of pre-trained models, further reducing the need for extensive computational resources. Overall, deep learning has helped to make intelligent industrial applications more efficient and cost-effective by optimizing the use of storage and computational resources.
Article
Full-text available
Recent years have witnessed a rising demand for edge computing, and there is a need for methods to decrease the computational cost while maintaining a high learning performance when processing information at arbitrary edges. Reservoir computing using physical dynamics has attracted significant attention. However, currently, the timescale of the input signals that can be processed by physical reservoirs is limited by the transient characteristics inherent to the selected physical system. This study used an Sn‐doped In2O3/Nb‐doped SrTiO3 junction to fabricate a memristor that could respond to both electrical and optical stimuli. The results show that the timescale of the transient current response of the device could be controlled over several orders of magnitude simply by applying a small voltage. The computational performance of the device as a physical reservoir is evaluated in an image classification task, demonstrating that the learning accuracy could be optimized by tuning the device to exhibit appropriate transient characteristics according to the timescale of the input signals. These results are expected to provide deeper insights into the photoconductive properties of strontium titanate, as well as support the physical implementation of computing systems.
Article
Edge computing has emerged as a promising paradigm for addressing the challenges of latency, bandwidth, and energy consumption in the era of big data and the intelligent Internet of Things. However, the limited computing resources of edge devices and their vulnerability to failures pose significant challenges to the reliability and availability of edge computing systems. To this end, we propose a novel architecture for reliable edge computing that leverages the collective computing power of mobile edge devices in this article. Our architecture employs a task-oriented triple-stage monitoring mechanism to ensure system reliability. Moreover, we present a shared computing framework that allows edge devices to dynamically share their computing resources based on the current availability and workload. We evaluate the effectiveness of the proposed architecture with several computational tasks, including π calculation and video processing. The results show that our architecture achieves high reliability and availability while also improving the performance and energy efficiency of edge devices.
Article
Full-text available
Metal-based Additive Manufacturing (AM) can realize fully dense metallic components and thus offers an opportunity to compete with conventional manufacturing based on the unique merits possible through layer-by-layer processing. Unsurprisingly, Machine Learning (ML) applications in AM technologies have been increasingly growing in the past several years. The trend is driven by the ability of data-driven techniques to support a range of AM concerns, including in-process monitoring and predictions. However, despite numerous ML applications being reported for different AM concerns, no framework exists to systematically manage these ML models for AM operations in the industry. Moreover, no guidance exists on fundamental requirements to realize such a cross-disciplinary platform. Working with experts in ML and AM, this work identifies the fundamental requirements to realize a Machine Learning Operations (MLOps) platform to support process-based ML models for industrial metal AM (MAM). Project-level activities are identified in terms of functional roles, processes, systems, operations, and interfaces. These components are discussed in detail and are linked with their respective requirements. In this regard, peer-reviewed references to identified requirements are made available. The requirements identified can help guide small and medium enterprises looking to implement ML solutions for AM in the industry. Challenges and opportunities for such a system are highlighted. The system can be expanded to include other lifecycle phases of metallic and non-metallic AM.
Conference Paper
Full-text available
Despite the broad utilization of cloud computing, some applications and services still cannot benefit from this popular computing paradigm due to inherent problems of cloud computing such as unacceptable latency, lack of mobility support and location-awareness. As a result, fog computing has emerged as a promising infrastructure to provide elastic resources at the edge of the network. In this paper, we have discussed current definitions of fog computing and similar concepts, and proposed a more comprehensive definition. We also analyzed the goals and challenges in fog computing platform, and presented platform design with several exemplar applications. We finally implemented and evaluated a prototype fog computing platform.
Article
Full-text available
In the inaugural issue of MC2R in April 1997 [24], I highlighted the seminal influence of mobility in computing. At that time, the goal of "information at your fingertips anywhere, anytime" was only a dream. Today, through relentless pursuit of innovations in wireless technology, energy-efficient portable hardware and adaptive software, we have largely attained this goal. Ubiquitous email and Web access is a reality that is experienced by millions of users worldwide through their Blackberries, iPhones, iPads, Windows Phone devices, and Android-based devices. Mobile Web-based services and location-aware advertising opportunities have emerged, triggering large commercial investments. Mobile computing has arrived as a lucrative business proposition. Looking ahead, what are the dreams that will inspire our future efforts in mobile computing? We begin this paper by considering some imaginary mobile computing scenarios from the future. We then extract the deep assumptions implicit in these scenarios, and use them to speculate on the future trajectory of mobile computing.
Article
Full-text available
We describe the architecture and prototype implementation of an assistive system based on Google Glass devices for users in cognitive decline. It combines the first-person image capture and sensing capabilities of Glass with remote processing to perform real-time scene interpretation. The system architecture is multi-tiered. It offers tight end-to-end latency bounds on compute-intensive operations, while addressing concerns such as limited battery capacity and limited processing capability of wearable devices. The system gracefully degrades services in the face of network failures and unavailability of distant architectural tiers.
Conference Paper
Full-text available
Despite the tremendous market penetration of smartphones, their utility has been and will remain severely limited by their battery life. A major source of smartphone battery drain is accessing the Internet over cellular or WiFi connection when running various apps and services. Despite much anecdotal evidence of smartphone users experiencing quicker battery drain in poor signal strength, there has been limited understanding of how often smartphone users experience poor signal strength and the quantitative impact of poor signal strength on the phone battery drain. The answers to such questions are essential for diagnosing and improving cellular network services and smartphone battery life and help to build more accurate online power models for smartphones, which are building blocks for energy profiling and optimization of smartphone apps. In this paper, we conduct the first measurement and modeling study of the impact of wireless signal strength on smartphone energy consumption. Our study makes four contributions. First, through analyzing traces collected on 3785 smartphones for at least one month, we show that poor signal strength of both 3G and WiFi is routinely experienced by smartphone users, both spatially and temporally. Second, we quantify the extra energy consumption on data transfer induced by poor wireless signal strength. Third, we develop a new power model for WiFi and 3G that incorporates the signal strength factor and significantly improves the modeling accuracy over the previous state of the art. Finally, we perform what-if analysis to quantify the potential energy savings from opportunistically delaying network traffic by exploring the dynamics of signal strength experienced by users.
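The fitted power model itself is not reproduced on this page; purely as a hedged illustration of how signal strength can enter a transfer-energy estimate, the Python sketch below assumes a simple penalty factor that grows as the received signal weakens. The coefficients, thresholds, and functional form are invented placeholders, not the model described above.

# Illustrative only: a toy transfer-energy estimate in which poorer signal
# strength raises both the radio power and the transfer time. Coefficients
# are invented placeholders, not values fitted in the paper.
def transfer_energy_joules(bytes_to_send, signal_dbm,
                           base_power_w=0.3, alpha=25.0,
                           throughput_bps=2_000_000):
    # Map signal strength (e.g., -50 dBm good, -100 dBm poor) to a penalty
    # factor: weaker signal -> higher radio power and lower throughput.
    penalty = max(1.0, (-signal_dbm - 50) / alpha)
    power_w = base_power_w * penalty          # radio power while transmitting
    effective_bps = throughput_bps / penalty  # weaker signal also slows transfer
    seconds = (bytes_to_send * 8) / effective_bps
    return power_w * seconds

# Example: sending 1 MB under good (-55 dBm) vs. poor (-95 dBm) signal.
print(transfer_energy_joules(1_000_000, -55))
print(transfer_energy_joules(1_000_000, -95))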
Article
Full-text available
Smartphones have exploded in popularity in recent years, becoming ever more sophisticated and capable. As a result, developers worldwide are building increasingly complex applications that require ever increasing amounts of computational power and energy. In this paper we propose ThinkAir, a framework that makes it simple for developers to migrate their smartphone applications to the cloud. ThinkAir exploits the concept of smartphone virtualization in the cloud and provides method-level computation offloading. Advancing on previous work, it focuses on the elasticity and scalability of the cloud and enhances the power of mobile cloud computing by parallelizing method execution using multiple virtual machine (VM) images. We implement ThinkAir and evaluate it with a range of benchmarks, from simple micro-benchmarks to more complex applications. First, we show that execution time and energy consumption decrease by two orders of magnitude for an N-queens puzzle application and by one order of magnitude for a face detection and a virus scan application. We then show that a parallelizable application can invoke multiple VMs to execute in the cloud in a seamless and on-demand manner, achieving greater reductions in execution time and energy consumption. We finally use a memory-hungry image combiner tool to demonstrate that applications can dynamically request VMs with more computational power in order to meet their computational requirements.
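ThinkAir's VM-based runtime is not shown here; as a rough sketch of the idea of parallelizing one method across several clones, the following assumed example splits an N-queens count over local worker processes standing in for cloud VM images. The split-by-first-column strategy and the process pool are illustrative choices, not ThinkAir's mechanism.

# Sketch only: splitting a parallelizable method (counting N-queens solutions)
# across worker processes, standing in here for ThinkAir-style VM clones.
from multiprocessing import Pool

def count_from_first_column(args):
    n, first_col = args
    def place(row, cols, diag1, diag2):
        if row == n:
            return 1
        total = 0
        for c in range(n):
            if c in cols or (row - c) in diag1 or (row + c) in diag2:
                continue
            total += place(row + 1, cols | {c}, diag1 | {row - c}, diag2 | {row + c})
        return total
    # Queen of row 0 is fixed at first_col; count completions of rows 1..n-1.
    return place(1, {first_col}, {0 - first_col}, {0 + first_col})

if __name__ == "__main__":
    n = 10
    with Pool() as pool:  # each worker plays the role of one remote clone
        partial = pool.map(count_from_first_column, [(n, c) for c in range(n)])
    print(sum(partial))   # 724 solutions for n = 10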
Article
Full-text available
Fog Computing extends the Cloud Computing paradigm to the edge of the network, thus enabling a new breed of applications and services. Defining characteristics of the Fog are: a) low latency and location awareness; b) wide-spread geographical distribution; c) mobility; d) a very large number of nodes; e) the predominant role of wireless access; f) a strong presence of streaming and real-time applications; and g) heterogeneity. In this paper we argue that these characteristics make the Fog the appropriate platform for a number of critical Internet of Things (IoT) services and applications, namely Connected Vehicle, Smart Grid, Smart Cities, and, in general, Wireless Sensor and Actuator Networks (WSANs).
Book
Full-text available
Foreword by Peter Friess & Gérald Santuci: It goes without saying that we are very content to publish this Clusterbook and to leave it today in your hands. The Cluster of European Research Projects on the Internet of Things – CERP-IoT – comprises around 30 major research initiatives, platforms and networks working in the field of identification technologies such as Radio Frequency Identification and in what could tomorrow become an Internet-connected and inter-connected world of objects. The book in front of you reports on the research and innovation issues at stake and demonstrates approaches and examples of possible solutions. If you take a closer look you will realise that the Cluster reflects exactly the ongoing developments towards a future Internet of Things – growing use of identification technologies, massive deployment of simple and smart devices, and increasing connection between objects and systems. Of course, many developments are less directly derived from the core research area but contribute significantly to creating the "big picture" and the paradigm change. We are also conscious of the need to maintain Europe's strong position in these fields and the results being achieved, while at the same time understanding the challenges ahead as a global endeavour with our international partners. As regards international co-operation, the Cluster is committed to increasing the number of common activities with the existing international partners and to looking for various stakeholders in other countries. However, we are just at the beginning and, following the prognostics which predict 50 to 100 billion devices to be connected by 2020, the true research work starts now. The European Commission has decided to implement its Internet of Things policy to support an economic revival and provide a better life for its citizens, and it has just selected from the last call for proposals several new Internet of Things research projects as part of the 7th Framework Programme on European Research. We wish you a pleasant and enjoyable read and ask you to stay connected with us for the future. Special thanks are expressed to Harald Sundmaeker and his team, who made a remarkable effort in assembling this Clusterbook.
Conference Paper
Full-text available
Energy efficiency is a fundamental consideration for mobile devices. Cloud computing has the potential to save mobile client energy but the savings from offloading the computation need to exceed the energy cost of the additional communication. In this paper we provide an analysis of the critical factors affecting the energy consumption of mobile clients in cloud computing. Further, we present our measurements about the central characteristics of contemporary mobile handheld devices that define the basic balance between local and remote computing. We also describe a concrete example, which demonstrates energy savings. We show that the trade-offs are highly sensitive to the exact characteristics of the workload, data communication patterns and technologies used, and discuss the implications for the design and engineering of energy efficient mobile cloud computing solutions.
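A compact, generic way to write the break-even condition discussed above (a textbook-style formulation, not the paper's measurement model) is:

E_{\mathrm{local}} = P_{\mathrm{cpu}}\,\frac{C}{S_{\mathrm{local}}},
\qquad
E_{\mathrm{offload}} = P_{\mathrm{tx}}\,\frac{D}{B} + P_{\mathrm{idle}}\,\frac{C}{S_{\mathrm{cloud}}},
\qquad
\text{offload iff } E_{\mathrm{local}} > E_{\mathrm{offload}},

where C is the amount of computation, S_local and S_cloud the local and remote execution speeds, D the data exchanged over a wireless link of bandwidth B, and P_cpu, P_tx, P_idle the device's compute, transmit, and idle power draws. Offloading saves energy only when the local computation energy exceeds the energy spent transferring data plus waiting for the remote result, which is why the trade-off is so sensitive to workload and communication characteristics.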
Conference Paper
Full-text available
Cloud computing has seen tremendous growth, particularly for commercial web applications. The on-demand, pay-as-you-go model creates a flexible and cost-effective means to access compute resources. For these reasons, the scientific computing community has shown increasing interest in exploring cloud computing. However, the underlying implementation and performance of clouds are very different from those at traditional supercomputing centers. It is therefore critical to evaluate the performance of HPC applications in today's cloud environments to understand the tradeoffs inherent in migrating to the cloud. This work represents the most comprehensive evaluation to date comparing conventional HPC platforms to Amazon EC2, using real applications representative of the workload at a typical supercomputing center. Overall results indicate that EC2 is six times slower than a typical mid-range Linux cluster, and twenty times slower than a modern HPC system. The interconnect on the EC2 cloud platform severely limits performance and causes significant variability.
Conference Paper
Full-text available
This paper presents MAUI, a system that enables fine-grained energy-aware offload of mobile code to the infrastructure. Previous approaches to these problems either relied heavily on programmer support to partition an application, or they were coarse-grained, requiring full process (or full VM) migration. MAUI uses the benefits of a managed code environment to offer the best of both worlds: it supports fine-grained code offload to maximize energy savings with minimal burden on the programmer. MAUI decides at runtime which methods should be remotely executed, driven by an optimization engine that achieves the best energy savings possible under the mobile device's current connectivity constraints. In our evaluation, we show that MAUI enables: 1) a resource-intensive face recognition application that consumes an order of magnitude less energy, 2) a latency-sensitive arcade game application that doubles its refresh rate, and 3) a voice-based language translation application that bypasses the limitations of the smartphone environment by executing unsupported components remotely.
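MAUI formulates this choice as an optimization over the application's call graph; the brute-force Python sketch below only illustrates the shape of such a decision. The method names, cost numbers, energy model, and latency budget are invented placeholders, not MAUI's profiler output or its actual solver.

# Sketch of a MAUI-style offload decision as exhaustive search over which
# remoteable methods to run remotely. All numbers are illustrative.
from itertools import product

# (local_energy_mJ, local_ms, remote_ms, state_transfer_kB) per method
methods = {
    "detectFaces":   (900.0, 400.0, 40.0, 300.0),
    "recognizeFace": (600.0, 250.0, 30.0, 120.0),
    "renderUI":      (120.0,  30.0, 30.0,   0.0),
}
PINNED_LOCAL = {"renderUI"}   # UI code must stay on the device
TX_MJ_PER_KB = 1.2            # radio energy per kB under current connectivity
MS_PER_KB    = 0.8            # transfer latency per kB
LATENCY_BUDGET_MS = 500

def cost(assignment):
    energy = latency = 0.0
    for (name, (e_loc, t_loc, t_rem, kb)), remote in zip(methods.items(), assignment):
        if remote:
            energy  += kb * TX_MJ_PER_KB
            latency += t_rem + kb * MS_PER_KB
        else:
            energy  += e_loc
            latency += t_loc
    return energy, latency

candidates = []
for assignment in product([False, True], repeat=len(methods)):
    if any(remote and name in PINNED_LOCAL
           for name, remote in zip(methods, assignment)):
        continue
    energy, latency = cost(assignment)
    if latency <= LATENCY_BUDGET_MS:
        candidates.append((energy, dict(zip(methods, assignment))))

print(min(candidates, key=lambda c: c[0]))  # lowest-energy plan within the budget

In MAUI itself the per-method costs come from continuous profiling, and the decision is re-solved as the device's connectivity changes.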
Article
Full-text available
The cloud heralds a new era of computing where application services are provided through the Internet. Cloud computing can enhance the computing capability of mobile systems, but is it the ultimate solution for extending such systems' battery lifetimes?
Article
Full-text available
Cloud computing, the long-held dream of computing as a utility, has the potential to transform a large part of the IT industry, making software even more attractive as a service and shaping the way IT hardware is designed and purchased. Developers with innovative ideas for new Internet services no longer require the large capital outlays in hardware to deploy their service or the human expense to operate it. They need not be concerned about overprovisioning for a service whose popularity does not meet their predictions, thus wasting costly resources, or underprovisioning for one that becomes wildly popular, thus missing potential customers and revenue. Moreover, companies with large batch-oriented tasks can get results as quickly as their programs can scale, since using 1,000 servers for one hour costs no more than using one server for 1,000 hours.
Article
Full-text available
"Information at your fingertips anywhere, anytime" has been the driving vision of mobile computing for the past two decades. Through relentless pursuit of this vision, spurring innovations in wireless technology, energy-efficient portable hardware and adaptive software, we have now largely attained this goal. Ubiquitous email and Web access is a reality that is experienced by millions of users worldwide through their BlackBerries, iPhones, Windows Mobile, and other portable devices. Continuing on this road, mobile Web-based services and location-aware advertising opportunities have begun to appear, triggering large commercial investments. Mobile computing has arrived as a lucrative business proposition.
Article
Full-text available
Mobile computing continuously evolves through the sustained effort of many researchers. It seamlessly augments users' cognitive abilities via compute-intensive capabilities such as speech recognition and natural language processing. By thus empowering mobile users, we could transform many areas of human activity. This article discusses the technical obstacles to these transformations and proposes a new architecture for overcoming them. In this architecture, a mobile user exploits virtual machine (VM) technology to rapidly instantiate customized service software on a nearby cloudlet and then uses that service over a wireless LAN; the mobile device typically functions as a thin client with respect to the service. A cloudlet is a trusted, resource-rich computer or cluster of computers that's well-connected to the Internet and available for use by nearby mobile devices. Our strategy of leveraging transiently customized proximate infrastructure as a mobile device moves with its user through the physical world is called cloudlet-based, resource-rich, mobile computing. Crisp interactive response, which is essential for seamless augmentation of human cognition, is easily achieved in this architecture because of the cloudlet's physical proximity and one-hop network latency. Using a cloudlet also simplifies the challenge of meeting the peak bandwidth demand of multiple users interactively generating and receiving media such as high-definition video and high-resolution images. Rapid customization of infrastructure for diverse applications emerges as a critical requirement, and our results from a proof-of-concept prototype suggest that VM technology can indeed help meet this requirement.
Article
Full-text available
The data centers used to create cloud services represent a significant investment in capital outlay and ongoing costs. Accordingly, we first examine the costs of cloud service data centers today. The cost breakdown reveals the importance of optimizing work completed per dollar invested. Unfortunately, the resources inside the data centers often operate at low utilization due to resource stranding and fragmentation. To attack this first problem, we propose (1) increasing network agility, and (2) providing appropriate incentives to shape resource consumption. Second, we note that cloud service providers are building out geo-distributed networks of data centers. Geo-diversity lowers latency to users and increases reliability in the presence of an outage taking out an entire site. However, without appropriate design and management, these geo-diverse data center networks can raise the cost of providing service. Moreover, leveraging geo-diversity requires services be designed to benefit from it. To attack this problem, we propose (1) joint optimization of network and data center resources, and (2) new systems and mechanisms for geo-distributing state.
Article
Full-text available
We describe a new approach to power saving and battery life extension on an untethered laptop through wireless remote processing of power-costly tasks. We ran a series of experiments comparing the power consumption of processes run locally with that of the same processes run remotely. We examined the trade-off between communication power expenditures and the power cost of local processing. This paper describes our methodology and results of our experiments. We suggest ways to further improve this approach, and outline a software design to support remote process execution.
Article
Full-text available
Although successive generations of middleware (such as RPC, CORBA, and DCOM) have made it easier to connect distributed programs, the process of distributed application decomposition has changed little: programmers manually divide applications into sub-programs and manually assign those subprograms to machines. Often the techniques used to choose a distribution are ad hoc and create one-time solutions biased to a specific combination of users, machines, and networks. We assert that system software, not the programmer, should manage the task of distributed decomposition. To validate our assertion we present Coign, an automatic distributed partitioning system that significantly eases the development of distributed applications. Given an application (in binary form) built from distributable COM components, Coign constructs a graph model of the application's inter-component communication through scenario-based profiling. Later, Coign applies a graph-cutting algorithm to partition the application across a network and minimize execution delay due to network communication. Using Coign, even an end user (without access to source code) can transform a non-distributed application into an optimized, distributed application. Coign has automatically distributed binaries from over 2 million lines of application code, including Microsoft's PhotoDraw 2000 image processor. To our knowledge, Coign is the first system to automatically partition and distribute binary applications.
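Coign operates on COM binaries; purely as a sketch of the underlying idea (cut the profiled inter-component communication graph so that traffic crossing the client/server boundary is minimized), the toy Python example below brute-forces the cheapest two-way placement of a small invented component graph with some components pinned to a side.

# Toy illustration of profile-driven partitioning: choose a client/server
# assignment for each component that minimizes communication crossing the
# cut. Components, edge weights, and pins are invented for illustration.
from itertools import product

components = ["ui", "decoder", "filter", "storage"]
pinned = {"ui": "client", "storage": "server"}
# Bytes exchanged between component pairs, as measured in profiling scenarios.
traffic = {
    ("ui", "decoder"): 5_000,
    ("decoder", "filter"): 900_000,
    ("filter", "storage"): 20_000,
    ("ui", "storage"): 1_000,
}

def cut_cost(placement):
    return sum(bytes_ for (a, b), bytes_ in traffic.items()
               if placement[a] != placement[b])

best = None
for sides in product(["client", "server"], repeat=len(components)):
    placement = dict(zip(components, sides))
    if any(placement[c] != side for c, side in pinned.items()):
        continue
    cost = cut_cost(placement)
    if best is None or cost < best[0]:
        best = (cost, placement)

print(best)  # decoder and filter land on the same side as their heavy edge

Coign itself applies a proper graph-cutting algorithm rather than enumeration, which is what makes the approach practical for applications with many components.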
Article
Cloud computing heralds a new era of computing where application services are provided through the Internet. Cloud computing can enhance the computing capability of mobile systems. Is cloud computing the ultimate solution for extending battery lifetimes of mobile systems?
Article
This paper presents an overview of the MobilityFirst network architecture, currently under development as part of the US National Science Foundation's Future Internet Architecture (FIA) program. The proposed architecture is intended to directly address the challenges of wireless access and mobility at scale, while also providing new services needed for emerging mobile Internet application scenarios. After briefly outlining the original design goals of the project, we provide a discussion of the main architectural concepts behind the network design, identifying key features such as separation of names from addresses, public-key based globally unique identifiers (GUIDs) for named objects, a global name resolution service (GNRS) for dynamic binding of names to addresses, storage-aware routing and late binding, content- and context-aware services, an optional in-network compute layer, and so on. This is followed by a brief description of the MobilityFirst protocol stack as a whole, along with an explanation of how the protocol works at end-user devices and inside network routers. Examples of specific advanced services supported by the protocol stack, including multi-homing, mobility with disconnection, and content retrieval/caching, are given for illustration. Further design details of two key protocol components, the GNRS name resolution service and the GSTAR routing protocol, are also described along with sample results from evaluation. In conclusion, a brief description of an ongoing multi-site experimental proof-of-concept deployment of the MobilityFirst protocol stack on the GENI testbed is provided.
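As a very reduced sketch of the name-to-address indirection that a GNRS-style service provides (GUIDs bound dynamically to current network addresses and rebound as devices move), the snippet below uses an in-memory dictionary standing in for the distributed, replicated resolution service; the GUIDs and addresses are invented.

# Minimal stand-in for GNRS-style dynamic name resolution: a GUID is looked
# up at send time and rebound when the named object moves. A real GNRS is a
# distributed service; this toy class only shows the binding semantics.
class ToyGNRS:
    def __init__(self):
        self._bindings = {}                 # GUID -> set of current addresses

    def register(self, guid, address):
        self._bindings.setdefault(guid, set()).add(address)

    def deregister(self, guid, address):
        self._bindings.get(guid, set()).discard(address)

    def resolve(self, guid):
        return sorted(self._bindings.get(guid, set()))

gnrs = ToyGNRS()
gnrs.register("GUID:phone-42", "net1.addr.17")    # initial attachment point
gnrs.register("GUID:phone-42", "net2.addr.9")     # multi-homed over a second network
print(gnrs.resolve("GUID:phone-42"))

gnrs.deregister("GUID:phone-42", "net1.addr.17")  # device moves away
print(gnrs.resolve("GUID:phone-42"))              # sender re-resolves (late binding)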
Article
MapReduce and its variants have been highly successful in implementing large-scale data-intensive applications on commodity clusters. However, most of these systems are built around an acyclic data flow model that is not suitable for other popular applications. This paper focuses on one such class of applications: those that reuse a working set of data across multiple parallel operations. This includes many iterative machine learning algorithms, as well as interactive data analysis tools. We propose a new framework called Spark that supports these applications while retaining the scalability and fault tolerance of MapReduce. To achieve these goals, Spark introduces an abstraction called resilient distributed datasets (RDDs). An RDD is a read-only collection of objects partitioned across a set of machines that can be rebuilt if a partition is lost. Spark can outperform Hadoop by 10x in iterative machine learning jobs, and can be used to interactively query a 39 GB dataset with sub-second response time.
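For readers unfamiliar with the RDD abstraction, a small PySpark fragment gives its flavor: the cached RDD is the working set reused across several operations. The local master setting and the input path are assumptions made for illustration.

# Sketch of reusing a cached working set across repeated operations with
# PySpark RDDs. The local master and the input path are assumptions; on a
# cluster, only the SparkContext configuration changes.
from pyspark import SparkContext

sc = SparkContext("local[*]", "rdd-working-set-demo")

lines = sc.textFile("logs.txt")                          # assumed input file
errors = lines.filter(lambda l: "ERROR" in l).cache()    # working set kept in memory

# Several operations over the same cached RDD; only the first one re-reads
# the input, later ones hit the in-memory partitions.
print(errors.count())
print(errors.filter(lambda l: "timeout" in l).count())
print(errors.map(lambda l: l.split(" ")[0]).distinct().count())

sc.stop()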
Article
The Hadoop Distributed File System (HDFS) is designed to store very large data sets reliably, and to stream those data sets at high bandwidth to user applications. In a large cluster, thousands of servers both host directly attached storage and execute user application tasks. By distributing storage and computation across many servers, the resource can grow with demand while remaining economical at every size. We describe the architecture of HDFS and report on experience using HDFS to manage 25 petabytes of enterprise data at Yahoo!.
Article
Ubiquitous sensing enabled by Wireless Sensor Network (WSN) technologies cuts across many areas of modern day living. This offers the ability to measure, infer and understand environmental indicators, from delicate ecologies and natural resources to urban environments. The proliferation of these devices in a communicating-actuating network creates the Internet of Things (IoT), wherein sensors and actuators blend seamlessly with the environment around us, and the information is shared across platforms in order to develop a common operating picture (COP). Fuelled by the recent adaptation of a variety of enabling device technologies such as RFID tags and readers, near field communication (NFC) devices and embedded sensor and actuator nodes, the IoT has stepped out of its infancy and is the next revolutionary technology in transforming the Internet into a fully integrated Future Internet. As we move from www (static pages web) to web2 (social networking web) to web3 (ubiquitous computing web), the need for data-on-demand using sophisticated intuitive queries increases significantly. This paper presents a cloud-centric vision for a worldwide implementation of the Internet of Things. The key enabling technologies and application domains that are likely to drive IoT research in the near future are discussed. A cloud implementation using Aneka, which is based on the interaction of private and public clouds, is presented. We conclude our IoT vision by expanding on the need for convergence of WSN, the Internet and distributed computing directed at the technological research community.
Conference Paper
Mobile applications are becoming increasingly ubiquitous and provide ever richer functionality on mobile devices. At the same time, such devices often enjoy strong connectivity with more powerful machines ranging from laptops and desktops to commercial clouds. This paper presents the design and implementation of CloneCloud, a system that automatically transforms mobile applications to benefit from the cloud. The system is a flexible application partitioner and execution runtime that enables unmodified mobile applications running in an application-level virtual machine to seamlessly off-load part of their execution from mobile devices onto device clones operating in a computational cloud. CloneCloud uses a combination of static analysis and dynamic profiling to partition applications automatically at a fine granularity while optimizing execution time and energy use for a target computation and communication environment. At runtime, the application partitioning is effected by migrating a thread from the mobile device at a chosen point to the clone in the cloud, executing there for the remainder of the partition, and re-integrating the migrated thread back to the mobile device. Our evaluation shows that CloneCloud can adapt application partitioning to different environments, and can help some applications achieve as much as a 20x execution speed-up and a 20-fold decrease of energy spent on the mobile device.
Conference Paper
We have designed and implemented the Google File System, a scalable distributed file system for large distributed data-intensive applications. It provides fault tolerance while running on inexpensive commodity hardware, and it delivers high aggregate performance to a large number of clients. While sharing many of the same goals as previous distributed file systems, our design has been driven by observations of our application workloads and technological environment, both current and anticipated, that reflect a marked departure from some earlier file system assumptions. This has led us to reexamine traditional choices and explore radically different design points. The file system has successfully met our storage needs. It is widely deployed within Google as the storage platform for the generation and processing of data used by our service as well as research and development efforts that require large data sets. The largest cluster to date provides hundreds of terabytes of storage across thousands of disks on over a thousand machines, and it is concurrently accessed by hundreds of clients. In this paper, we present file system interface extensions designed to support distributed applications, discuss many aspects of our design, and report measurements from both micro-benchmarks and real world use.
Conference Paper
MapReduce is a programming model and an associated implementation for processing and generating large datasets that is amenable to a broad variety of real-world tasks. Users specify the computation in terms of a map and a reduce function, and the underlying runtime system automatically parallelizes the computation across large-scale clusters of machines, handles machine failures, and schedules inter-machine communication to make efficient use of the network and disks. Programmers find the system easy to use: more than ten thousand distinct MapReduce programs have been implemented internally at Google over the past four years, and an average of one hundred thousand MapReduce jobs are executed on Google's clusters every day, processing a total of more than twenty petabytes of data per day.
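To make the map/reduce split concrete, here is a single-machine word-count sketch in plain Python; the distributed runtime (partitioning, scheduling, fault tolerance) is exactly what the real system adds and is not reproduced here.

# Word count expressed in the map/reduce style, executed sequentially.
from collections import defaultdict

def map_fn(document):
    for word in document.split():
        yield word.lower(), 1                    # emit (key, value) pairs

def reduce_fn(word, counts):
    return word, sum(counts)

documents = ["the edge of the network", "edge computing at the edge"]

# Shuffle phase: group intermediate values by key.
grouped = defaultdict(list)
for doc in documents:
    for word, count in map_fn(doc):
        grouped[word].append(count)

result = dict(reduce_fn(word, counts) for word, counts in grouped.items())
print(result)   # {'the': 3, 'edge': 3, 'of': 1, 'network': 1, 'computing': 1, 'at': 1}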
Conference Paper
While many public cloud providers offer pay-as-you-go computing, their varying approaches to infrastructure, virtualization, and software services lead to a problem of plenty. To help customers pick a cloud that fits their needs, we develop CloudCmp, a systematic comparator of the performance and cost of cloud providers. CloudCmp measures the elastic computing, persistent storage, and networking services offered by a cloud along metrics that directly reflect their impact on the performance of customer applications. CloudCmp strives to ensure fairness, representativeness, and compliance of these measurements while limiting measurement cost. Applying CloudCmp to four cloud providers that together account for most of the cloud customers today, we find that their offered services vary widely in performance and costs, underscoring the need for thoughtful provider selection. From case studies on three representative cloud applications, we show that CloudCmp can guide customers in selecting the best-performing provider for their applications.
Conference Paper
PROFINET is the industrial Ethernet standard devised by PROFIBUS International (PI) for either modular machine and plant engineering or distributed IO. Using plant-wide multi-vendor engineering for modular machines, commissioning time as well as costs are reduced. With distributed IO, IO-controllers (e.g., PLCs) and their associated IO-devices may also be integrated into PROFINET solutions. Communication is a major part of PROFINET. Real-time communication for standard factory automation applications, as well as extensions that enable motion control applications, is covered in a common real-time protocol. The advantages of modular and multi-vendor engineering and distributed IO can be used even in applications with time-critical data transfer requirements.
OpenFog Consortium Architecture Working Group, "OpenFog Architecture Overview."
F. Bonomi, R. Milito, J. Zhu, and S. Addepalli, "Fog computing and its role in the internet of things," in Proceedings of the first edition of the MCC workshop on Mobile cloud computing. ACM, 2012, pp. 13-16.
K. Shvachko, H. Kuang, S. Radia, and R. Chansler, "The hadoop distributed file system," in Mass Storage Systems and Technologies (MSST), 2010 IEEE 26th Symposium on. IEEE, 2010, pp. 1-10.
M. Zaharia, M. Chowdhury, M. J. Franklin, S. Shenker, and I. Stoica, "Spark: cluster computing with working sets," in Proceedings of the 2nd USENIX conference on Hot topics in cloud computing, vol. 10, 2010, p. 10.
K. Ha, Z. Chen, W. Hu, W. Richter, P. Pillai, and M. Satyanarayanan, "Towards wearable cognitive assistance," in Proceedings of the 12th annual international conference on Mobile systems, applications, and services. ACM, 2014, pp. 68-81.
J. Feld, "PROFINET-scalable factory communication for all applications," in Factory Communication Systems, 2004. Proceedings. 2004 IEEE International Workshop on. IEEE, 2004, pp. 33-38.
N. Ding, D. Wagner, X. Chen, A. Pathak, Y. C. Hu, and A. Rice, "Characterizing and modeling the impact of wireless signal strength on smartphone battery drain," SIGMETRICS Perform. Eval. Rev., vol. 41, no. 1, pp. 29-40, Jun. 2013. [Online]. Available: http://doi.acm.org/10.1145/2494232.2466586