Article

Edge Computing: Vision and Challenges


Abstract

The proliferation of Internet of Things and the success of rich cloud services have pushed the horizon of a new computing paradigm, Edge computing, which calls for processing the data at the edge of the network. Edge computing has the potential to address the concerns of response time requirement, battery life constraint, bandwidth cost saving, as well as data safety and privacy. In this paper, we introduce the definition of Edge computing, followed by several case studies, ranging from cloud offloading to smart home and city, as well as collaborative Edge to materialize the concept of Edge computing. Finally, we present several challenges and opportunities in the field of Edge computing, and hope this paper will gain attention from the community and inspire more research in this direction.


... Unfortunately, the non-negligible latency due to data transmission has become a bottleneck for real-time inspection applications. To overcome this challenge, the newly developed edge computing paradigm [22] appears to offer a good solution for smart industrial scenarios. We specify our basic assumptions as follows: product defects are all caused by damage or abrasion, either during the manufacturing process or during service. ...
... As a rapidly developing technology, edge computing [22] was proposed to enable the offloading of computing tasks to the edge of a communication network topology instead of transferring the data to the backend cloud. In the context of the Industrial Internet of Things (IIOT), edge intelligent manufacturing technologies have been developed for efficient computation and the continuous execution of manufacturing instructions. ...
... $A_{m_3}(I, I_{rec}) = RM(I, I_{rec}) = I - I_{rec}$ (22). Finally, the multimodal inspection result is obtained by fusing the above anomaly maps using the dot-product operation, as shown in the following equation: ...
Preprint
Recent advances in the industrial inspection of textured surfaces-in the form of visual inspection-have made such inspections possible for efficient, flexible manufacturing systems. We propose an unsupervised feature memory rearrangement network (FMR-Net) to accurately detect various textural defects simultaneously. Consistent with mainstream methods, we adopt the idea of background reconstruction; however, we innovatively utilize artificial synthetic defects to enable the model to recognize anomalies, while traditional wisdom relies only on defect-free samples. First, we employ an encoding module to obtain multiscale features of the textured surface. Subsequently, a contrastive-learning-based memory feature module (CMFM) is proposed to obtain discriminative representations and construct a normal feature memory bank in the latent space, which can be employed as a substitute for defects and fast anomaly scores at the patch level. Next, a novel global feature rearrangement module (GFRM) is proposed to further suppress the reconstruction of residual defects. Finally, a decoding module utilizes the restored features to reconstruct the normal texture background. In addition, to improve inspection performance, a two-phase training strategy is utilized for accurate defect restoration refinement, and we exploit a multimodal inspection method to achieve noise-robust defect localization. We verify our method through extensive experiments and test its practical deployment in collaborative edge--cloud intelligent manufacturing scenarios by means of a multilevel detection method, demonstrating that FMR-Net exhibits state-of-the-art inspection accuracy and shows great potential for use in edge-computing-enabled smart industries.
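
The citing excerpt above (and Equation (22) in it) describes fusing per-modality anomaly maps, including the reconstruction residual I - I_rec, with an element-wise (dot-product) operation. The following is a minimal, hypothetical NumPy sketch of that fusion step; the function name fuse_anomaly_maps and the toy score maps are assumptions for illustration, not the authors' code.

```python
import numpy as np

def fuse_anomaly_maps(maps):
    """Fuse per-modality anomaly maps by element-wise multiplication.

    `maps` is a list of 2-D arrays of identical shape, each scoring how
    anomalous every pixel looks under one inspection modality (e.g. a
    latent-space distance map and a reconstruction-residual map).
    Multiplying them suppresses locations flagged by only one modality,
    which is what makes the fused result more noise-robust.
    """
    fused = np.ones_like(maps[0], dtype=float)
    for m in maps:
        # Normalise each map to [0, 1] so no single modality dominates.
        m = (m - m.min()) / (m.max() - m.min() + 1e-8)
        fused *= m
    return fused

# Toy example: a latent-space score map and a reconstruction residual |I - I_rec|.
rng = np.random.default_rng(0)
latent_score = rng.random((64, 64))
residual = np.abs(rng.random((64, 64)) - rng.random((64, 64)))
anomaly = fuse_anomaly_maps([latent_score, residual])
print(anomaly.shape, float(anomaly.max()))
```
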
... This is usually done to improve response times and to use fewer network resources, as well as fewer resources of nearby devices, by performing local processing at the edge node (device). For example, a smartphone or a tablet PC is an edge device [19]. Our contributions: In order to overcome the aforementioned shortcomings (elaborated in more detail in Section II), we propose a novel secure feature selection (filtering) protocol based on information-theoretic metrics such as entropy. ...
... TACS sends _ to EDS for decryption, while sending the randomization vector ℎ2_ to each EDO (lines [16]-[18]). In Phase IIC, EDS decrypts _ , which looks like _ = {(( ( ) − ) × + ℎ 1, ) + ℎ 2, 19), and sends it to each EDO (lines [19]-[22]). and are not part of the ℎ set, which has | ′| elements (words) (line 25). PROTOCOL 2: secFS-S1 (secure Feature Selection - Stage I). INPUT: : the local datasets of EDOs, 1 ≤ ≤ ; : minimum allowed global document appearance for each word; _ : a vector to permute bit hashes at EDOs; ( ( _2 ), ( _2 ), ( _1 )) // EDO k's 3rd block containing the number of local ham and spam mails (documents), respectively, replicated | ′| times. ...
Preprint
Full-text available
We tackle the problem of secure feature selection by homomorphically evaluating features' information gains over horizontally partitioned private datasets owned by edge IoT (Internet of Things) devices. In the process we use a powerful cloud server to do the bulk of the costly homomorphic aggregations. We proceed with secure training (learning) and classification over the selected features in the same environmental settings (context). Throughout, the participants interact with each other under strict security, privacy and efficiency requirements. To this end, we provide confidentiality, integrity and authenticity (CIA) to each participant's interaction by signing its hashed contents with the corresponding participant's private key. We assure consistency among interactions by introducing timestamps and linking them with the hashed content(s) of the preceding interaction(s). This makes our protocols a natural fit for blockchain technology. Extensive experimental evaluations over benchmark datasets give our secure protocols an advantage ranging from several times to orders of magnitude w.r.t. the state of the art in terms of computation and communication costs. Furthermore, our schemes are among the best in the literature in terms of security and privacy properties, and they show a high rate of fault tolerance and resistance to collusion attacks among edge IoT dataset owners.
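
As a rough illustration of the feature-selection metric the abstract mentions, the sketch below computes the information gain of a single word feature from aggregated ham/spam document counts in the clear; in the paper itself this quantity is evaluated homomorphically over encrypted, horizontally partitioned counts. The function names and example counts are assumptions.

```python
import math

def entropy(counts):
    """Shannon entropy (in bits) of a discrete distribution given by counts."""
    total = sum(counts)
    return -sum((c / total) * math.log2(c / total) for c in counts if c > 0)

def information_gain(ham_with, spam_with, ham_without, spam_without):
    """Information gain of a single word feature for ham/spam classification.

    The four arguments are global document counts: documents containing the
    word (split into ham/spam) and documents not containing it.
    """
    n = ham_with + spam_with + ham_without + spam_without
    h_class = entropy([ham_with + ham_without, spam_with + spam_without])
    n_with = ham_with + spam_with
    n_without = ham_without + spam_without
    h_cond = (n_with / n) * entropy([ham_with, spam_with]) \
           + (n_without / n) * entropy([ham_without, spam_without])
    return h_class - h_cond

# Aggregated counts as they might arrive from several edge dataset owners.
print(round(information_gain(ham_with=40, spam_with=160,
                             ham_without=360, spam_without=40), 4))
```
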
... Edge computing (EC) is a new category of cloud computing services that aims to reduce service latency and is a promising innovation for new services such as the Internet of Things, augmented reality, efficient local content distribution, and data caching [1,2]. Broadly, the well-known fog computing [3], cloudlets, and Multi-Access Edge Computing (MEC) [4] are the different types of EC within their respective domains. ...
... In this section, the admission cost of every SFC request is prioritized based on its support value. The support value is estimated from the extracted cost value of each request, as expressed in Equation (2). ...
... The support-value-based graph is generated using Algorithm 1 (support-value-based graph creation). Each multicast request's support value is first determined using Equation (2). The computed support value is then used as a threshold for prioritizing multicast requests. ...
Article
Full-text available
The mobile edge cloud has developed as the main platform to offer low-latency network services from the edge of networks for the stringent delay requirements of mobile applications. In mobile edge cloud networks, network functions virtualization (NFV) creates the framework for building a new dynamic resource management structure to effectively utilize network resources. Delay-tolerant NFV-enabled multicast request admissions in a mobile edge-cloud network are explored in this paper to limit request admission delays or maximize system performance for a group of requests arriving individually. First, for the cost-reduction problem of a single NFV-enabled multicast request admission, the admission cost of each multicast request is assessed and the support-based graph is constructed. Here, the multicast requests are prioritized depending on their admission cost. Subsequently, trust and the delay-based local gradient are assessed for the prioritized multicast requests. Finally, delay-tolerant NFV multicasting is accomplished by First Come First Serve (FCFS) queuing based on the assessed local gradient of the requests. When compared to existing approaches, the experimental results show that the proposed methodology is superior in terms of throughput, admission cost, and running time.
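
One plausible reading of the admission pipeline described above (assumed here, not taken from the paper) is: compute an admission cost per multicast request, derive a support value from those costs, prioritize requests accordingly, and then serve them with FCFS queuing. The short Python sketch below illustrates that flow with hypothetical costs and a made-up support-value definition.

```python
from collections import deque

def support_value(cost, costs):
    """Toy support value: fraction of requests whose admission cost does not
    exceed this request's cost (a made-up definition for illustration)."""
    return sum(1 for c in costs if c <= cost) / len(costs)

# Hypothetical multicast requests with pre-computed admission costs.
requests = {"r1": 12.0, "r2": 5.5, "r3": 20.0, "r4": 8.0}
costs = list(requests.values())

# Prioritise requests by admission cost (lower first), then serve them FCFS.
ordered = sorted(requests, key=lambda r: requests[r])
queue = deque(ordered)
while queue:
    r = queue.popleft()           # FCFS service over the prioritised queue
    print(r, requests[r], round(support_value(requests[r], costs), 2))
```
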
... Raw data generated by IoT devices are moved across the network paths in order to be processed in remote data centers. This physical remoteness between data sources and processing mechanisms, however, imposes significant obstacles, including increased latency, limited control over data processing, unnecessary resource consumption, and safety and privacy vulnerabilities [24], [35]. As a consequence, difficulties arise in maintaining the desired levels of Quality of Service (QoS) as mandated by every application. ...
... Edge Computing (EC) comes into the scene as a promising paradigm to develop computation and analytics capabilities in the network edge devices and alleviate the aforementioned problems. The keystone of EC is that the tremendous quantity of data is suitably processed close to its source, evolving edge nodes into knowledge producers in addition to data consumers [24]. One can envision an ecosystem of EC nodes where processing may take place upon multiple distributed datasets. ...
... The core research challenge is related to selecting the tasks that should be offloaded and finding the best possible peer node to host their execution. The ultimate goal is to keep the execution of tasks at the EC, as this can reduce network traffic [24], driving data analytics towards geo-distributed processing (known as edge analytics) [23], [21], [31]. Evidently, the dynamic environment in which EC nodes act imposes various constraints and limitations on the decision making for selecting the tasks that should be offloaded to peer nodes. ...
Preprint
Full-text available
The advent of Edge Computing (EC) as a promising paradigm that provides multiple computation and analytics capabilities close to data sources opens new pathways for novel applications. Nonetheless, the limited computational capabilities of EC nodes and the expectation of ensuring high levels of QoS during tasks execution impose strict requirements for innovative management approaches. Motivated by the need of maintaining a minimum level of QoS during EC nodes functioning, we elaborate a distributed and intelligent decision-making approach for tasks scheduling. Our aim is to enhance the behavior of EC nodes making them capable of securing high QoS levels. We propose that nodes continuously monitor QoS levels and systematically evaluate the probability of violating them to proactively decide some tasks to be offloaded to peer nodes or Cloud. We present, describe and evaluate the proposed scheme through multiple experimental scenarios revealing its performance and the benefits of the envisioned monitoring mechanism when serving processing requests in very dynamic environments like the EC.
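
The scheme described above has each EC node monitor its QoS and proactively offload tasks when the estimated probability of violating the QoS target grows too high. The following is a minimal sketch of that monitoring loop, assuming a per-task latency deadline, a sliding window of observations, and a fixed violation threshold; all names and numbers are illustrative.

```python
import random
from collections import deque

class EdgeNode:
    """Minimal sketch of a node that monitors its QoS and proactively offloads."""

    def __init__(self, deadline_ms=50.0, window=100, threshold=0.2):
        self.deadline_ms = deadline_ms          # QoS target per task
        self.samples = deque(maxlen=window)     # recent observed latencies
        self.threshold = threshold              # tolerated violation probability

    def observe(self, latency_ms):
        self.samples.append(latency_ms)

    def violation_probability(self):
        if not self.samples:
            return 0.0
        return sum(s > self.deadline_ms for s in self.samples) / len(self.samples)

    def should_offload(self):
        # Offload some tasks to a peer or the cloud when the estimated
        # probability of violating the QoS target becomes too high.
        return self.violation_probability() > self.threshold

node = EdgeNode()
for _ in range(200):
    node.observe(random.gauss(45, 10))   # simulated task latencies in ms
print(round(node.violation_probability(), 2), node.should_offload())
```
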
... By performing data processing at the edge of networks, several shortcomings of cloud computing, such as long latency and network congestion, can be effectively addressed [6]- [8]. Notably, edge computing is an appealing technology to perform real-time tasks and make real-time decisions by exploiting the abundant computational resources of the edge servers [9]- [11]. Nevertheless, the bandwidth limitations and resource constraints of the wireless channels can pose significant challenges to realizing fast learning [12]- [14]. ...
... Although the scheduling constraint is different from the conventional least-squares constraint in sparse recovery, similar greedy approaches can be developed for this problem. (Footnote 9: that is, minimized over all linear receivers c.) ...
Preprint
This paper develops a class of low-complexity device scheduling algorithms for over-the-air federated learning via the method of matching pursuit. The proposed scheme tracks closely the close-to-optimal performance achieved by difference-of-convex programming, and outperforms significantly the well-known benchmark algorithms based on convex relaxation. Compared to the state-of-the-art, the proposed scheme poses a drastically lower computational load on the system: For $K$ devices and $N$ antennas at the parameter server, the benchmark complexity scales with $\left(N^2+K\right)^3 + N^6$ while the complexity of the proposed scheme scales with $K^p N^q$ for some $0 < p,q \leq 2$. The efficiency of the proposed scheme is confirmed via numerical experiments on the CIFAR-10 dataset.
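
To make the quoted complexity scalings concrete, the small script below plugs a few hypothetical system sizes into the benchmark scaling (N^2 + K)^3 + N^6 and the proposed scheme's upper bound K^p N^q with p = q = 2; the chosen values of K and N are arbitrary.

```python
# Rough comparison of the complexity scalings quoted in the abstract:
# benchmark ~ (N^2 + K)^3 + N^6  versus  proposed ~ K^p * N^q with 0 < p, q <= 2.
# Taking p = q = 2 (the worst case allowed by the stated bound) still shows a
# large gap for plausible federated-learning system sizes.
for K, N in [(20, 8), (100, 16), (500, 64)]:
    benchmark = (N**2 + K) ** 3 + N**6
    proposed = K**2 * N**2            # upper end of K^p N^q
    print(f"K={K:4d} N={N:3d}  benchmark~{benchmark:.2e}  proposed<={proposed:.2e}")
```
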
... Edge intelligence (EI), as a complementary processing architecture by combining edge computing (EC) [4] and AI, pushes the AI frontier from the cloud to the network edge to open the path for low-latency and critical-computation [5]. Specifically, EI is a burgeoning paradigm integrating network, computing, storage and AI, while providing EI services and satisfying the critical requirements of the Internet era in agile connection, real-time business, data optimization, application intelligence, security and privacy protection, etc. Notably, the celebrated Gartner hype cycle has regarded EI as an emerging technology that will enter a stationary phase in the following 5 to 10 years [6]. ...
... How to manage the heterogeneous computing-power resources to adapt to the diversification of customers' demands raises significant concern. ii) Due to the physical constraints [4], edge nodes could not support power-hungry or computation-intensive AI services. It is of great interest to focus on how to integrate the dispersed resources to provide a series of computing solutions for resource-constrained computing platforms. ...
Article
Full-text available
Driven by an unprecedented boom in artificial intelligence (AI) and Internet of Things (IoT), edge intelligence (EI) pushes the frontier of AI from cloud to network edge, serving as a remarkable solution that unlocks the full potential of AI services. It is yet facing critical challenges in its decentralized management and security, limiting its capabilities to support services with numerous requirements. In this context, blockchain (BC) has been seen as a promising solution to tackle the above issues, and further support EI. Based on the number of citations or the relevance of emerging methods, this paper presents the results of a literature survey on the integration of EI and BC. Accordingly, we summarize the recent research efforts reported in the existing works on EI and BC. We then paint a comprehensive picture of the limitations of EI and why BC could benefit from EI. From there, we explore how BC benefits EI in terms of computing power management, data administration, and model optimization. In order to narrow the gap between immature BC and EI-amicable BC, we also probe into how to tailor BC to EI from four perspectives, including flexible consensus protocol, effective incentive, intellectuality smart contract, and scalability. Finally, some research challenges and future directions are presented. Different from existing surveys, our work focuses on the integration of EI and BC, develops some general models to help the reader build relevant optimization models in the integrated system, as well as provides detailed tutorials on implementation. We anticipate that this survey will motivate further discussions on the synergy of EI and BC, and offer some guidance in EI, BC, future networks, and other areas.
... To address the shortcomings of the cloud model, researchers have proposed edge computing [9,10]. As a novel computing paradigm, edge computing can combine the resources of multiple devices at the edge of the network to provide task processing for IoT applications [11,12]. ...
... In addition, each task needs to be assigned to edge devices with different available resources, e.g., CPU, memory, storage, bandwidth, etc. The resources and private data required for each task are shown in Table 1. There are 10 available edge devices (d0-d9) in the demonstration scenario, and the available resources of each device are shown in Table 2. ...
Article
Full-text available
To meet the rapidly increasing demand for Internet of Things (IoT) applications, edge computing, as a novel computing paradigm, can combine devices at the edge of the network to collaboratively provide computing resources for IoT applications. However, the dynamic, heterogeneous, distributed, and resource-constrained nature of the edge computing paradigm also brings some problems, such as more serious privacy leakages and performance bottlenecks. Therefore, how to ensure that the resource requirements of the application are satisfied, while enhancing the protection of user privacy as much as possible, is a challenge for the task assignment of IoT applications. Aiming to address this challenge, we propose a privacy-aware IoT task assignment approach at the edge of the network. Firstly, we model the resource and privacy requirements for IoT applications and evaluate the resource satisfaction and privacy compatibility between edge devices and tasks. Secondly, we formulate the problem of privacy-aware IoT task assignment on edge devices (PITAE) and develop two solutions to the PITAE problem based on the greedy search algorithm and the Kuhn–Munkres (KM) algorithm. Finally, we conduct a series of simulation experiments to evaluate the proposed approach. The experimental results show that the PITAE problem can be solved effectively and efficiently.
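
As a sketch of the greedy variant of the task-assignment idea described above, the code below checks resource satisfaction between tasks and edge devices and then picks, for each task, the feasible device with the highest privacy-compatibility score. Data structures, scores, and the scoring rule are assumptions for illustration; the paper's KM-based solution is not shown.

```python
def satisfies(task_req, device_cap):
    """Resource satisfaction: the device must cover every requested resource."""
    return all(device_cap.get(r, 0) >= v for r, v in task_req.items())

def greedy_assign(tasks, devices, privacy_score):
    """Greedy sketch of privacy-aware task assignment.

    For each task, among devices with enough spare resources, pick the one with
    the highest privacy-compatibility score, then reserve its resources.
    """
    assignment = {}
    spare = {d: dict(cap) for d, cap in devices.items()}
    for t, req in tasks.items():
        candidates = [d for d in spare if satisfies(req, spare[d])]
        if not candidates:
            continue                      # task cannot be placed
        best = max(candidates, key=lambda d: privacy_score[t][d])
        assignment[t] = best
        for r, v in req.items():          # reserve the chosen device's resources
            spare[best][r] -= v
    return assignment

tasks = {"t0": {"cpu": 2, "mem": 1}, "t1": {"cpu": 1, "mem": 2}}
devices = {"d0": {"cpu": 2, "mem": 2}, "d1": {"cpu": 4, "mem": 4}}
privacy = {"t0": {"d0": 0.9, "d1": 0.4}, "t1": {"d0": 0.2, "d1": 0.8}}
print(greedy_assign(tasks, devices, privacy))   # -> {'t0': 'd0', 't1': 'd1'}
```
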
... Big data processing in IIoT networks thus includes intelligent modeling that can be performed using deep learning techniques [109]. In intelligent manufacturing, data modeling, labeling, and analysis play an important role [110]. Deep learning methods have emerged from automatic learning from the supplied data, finding patterns, and making correct decisions. ...
... Although several issues must be dealt with properly to make the integration of edge with IIoT more apparent, one of the issues listed in [110] is the programmability of edge devices. The issue of programmability is a significant difference in versatility between cloud platforms and edge tools that must be bridged. ...
Article
Full-text available
Industry 4.0 relates to the digital revolution of manufacturing and other sectors, such as retail, distribution, oil and gas, and infrastructure. Meanwhile, the Industrial Internet of Things (IIoT) is a technological advancement that leads to Industry 4.0 implementation by boosting the manufacturing sector’s productivity and economic impact. IIoT provides the ability to provide global connectivity between components in different locations. The manufacturing sector has had various difficulties implementing IIoT, primarily due to IIoT characteristics. This paper offers an in-depth review of Industry 4.0 and IIoT, where the primary motivation behind this is to introduce the most recent advancements related to Industry 4.0 and IIoT, as well as to address the existing limitations. Firstly, this paper presents a novel taxonomy of IIoT challenges that includes aspects of each challenge, such as the terminology and approaches utilized to solve these challenges. Besides IIoT challenges, this survey provides an in-depth demonstration of the many concepts related to IIoT, such as architecture and use cases. Secondly, this paper provides a comprehensive review of the state-of-the-art of Industry 4.0 in terms of concepts, requirements, and supporting technology. In addition, the correlation between enabling technology and technical requirements is discussed in detail. Finally, this paper highlights deep learning, edge computing, and big data as key techniques for the future directions of IIoT. Furthermore, the presented techniques are thoroughly examined to present an alternative method for future adoption. In addition to the showcased techniques, a new architecture for the future of IIoT based on these three primary techniques is also proposed.
... Research literature shows that data can often be significantly large (e.g., image, audio, video streams, 3D content, etc.) and usually needs to be processed with low latency [1]. Nevertheless, the massive amount of data streams, heterogeneous devices, and networks involved causes high traffic and affects the overall latency [2]. ...
... AI chips such as general-purpose chips (GPUs), semi-customized chips (FPGAs) and fully-customized chips (ASICs) are becoming readily available across many applications (Blanco-Filgueira et al., 2019; Rahman and Hossain, 2021; Zhu et al., 2021). Such AI chips are able to process a vast amount of data locally and provide timely responses and decisions (Shi et al., 2016). ...
Preprint
As edge devices become increasingly powerful, data analytics are gradually moving from a centralized to a decentralized regime where edge compute resources are exploited to process more of the data locally. This regime of analytics is coined as federated data analytics (FDA). In spite of the recent success stories of FDA, most literature focuses exclusively on deep neural networks. In this work, we take a step back to develop an FDA treatment for one of the most fundamental statistical models: linear regression. Our treatment is built upon hierarchical modeling that allows borrowing strength across multiple groups. To this end, we propose two federated hierarchical model structures that provide a shared representation across devices to facilitate information sharing. Notably, our proposed frameworks are capable of providing uncertainty quantification, variable selection, hypothesis testing and fast adaptation to new unseen data. We validate our methods on a range of real-life applications including condition monitoring for aircraft engines. The results show that our FDA treatment for linear models can serve as a competing benchmark model for future development of federated algorithms.
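
A toy way to picture the hierarchical, partial-pooling idea in this abstract is the sketch below: each device solves a local ridge-style regression shrunk toward a shared global coefficient vector, and a server averages the local solutions. This is a simplified stand-in for the proposed federated hierarchical models, with all names and the shrinkage rule assumed.

```python
import numpy as np

def federated_hierarchical_lr(device_data, rounds=20, tau=1.0):
    """Toy partial-pooling sketch for federated linear regression.

    Each device k holds (X_k, y_k) and never shares raw data. In every round it
    fits a ridge-style local solution shrunk toward the current global mean
    coefficient vector (the shared representation); the server then averages
    the local coefficients. `tau` controls how strongly devices are pooled.
    """
    d = device_data[0][0].shape[1]
    global_beta = np.zeros(d)
    for _ in range(rounds):
        local = []
        for X, y in device_data:
            A = X.T @ X + tau * np.eye(d)
            b = X.T @ y + tau * global_beta       # shrink toward the global mean
            local.append(np.linalg.solve(A, b))
        global_beta = np.mean(local, axis=0)      # server-side aggregation
    return global_beta, local

rng = np.random.default_rng(1)
true_betas = [np.array([2.0, -1.0]), np.array([2.5, -0.5]), np.array([1.5, -1.2])]
data = []
for beta in true_betas:                           # three heterogeneous devices
    X = rng.normal(size=(50, 2))
    y = X @ beta + 0.1 * rng.normal(size=50)
    data.append((X, y))
g, locals_ = federated_hierarchical_lr(data)
print(np.round(g, 2))
```
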
... Light but effective intrusion detection methods should also be improved [82], [99]-[101]. Machine learning at the edge nodes: IoT applications require fast processing and decision-making, which calls for bringing data processing closer to the user; otherwise, data must be sent to the cloud, which demands high bandwidth and time. ...
Article
Full-text available
The Internet of Things (IoT) is expected to connect devices with unique identifiers over a network to create an equilibrium system with high speeds and volumes of data while presenting an interoperability challenge. The IoT data management system is indispensable for attaining effective and efficient performance because IoT sensors generate and collect large amounts of data used to express large data sets. IoT data management has been analyzed from various perspectives in numerous studies. In this study, a Systematic Literature Review (SLR) method was used to investigate the various topics and key areas that have recently emerged in IoT data management. This study aims to classify and evaluate studies published between 2015 and 2021 in IoT data management. Therefore, the classification of studies includes five categories, data processing, data smartness application, data collection, data security, and data storage. Then, studies in each field are compared based on the proposed classification. Each study investigates novel findings, simulation/implementation, data set, application domain, experimental results, advantages, and disadvantages. In addition, the criteria for evaluating selected articles for each domain of IoT data management are examined. Big data accounts for the highest percentage of data processing fields in IoT data management, at 34%. In addition, fast data processing, distributed data, artificial intelligence data with 22%, and data uncertainty analysis account for 11% of the data processing field. Finally, studies highlight the challenges of IoT data management and its future directions.
... In recent years, researchers have begun to study the combination of edge computing and power systems (Okay and Ozdemir, 2016; Shi et al., 2016; Li et al., 2018; Sun et al., 2019). Reference (Jiang et al., 2013) presented a cloud/edge collaboration architecture designed for advanced measurement systems, which stores and analyses power status data (e.g., voltage, current, phase angle) through a three-layer configuration (edge device-fog-cloud), in which fog-layer devices implement all or part of the processing features, thereby effectively enhancing the computational performance of analytical processing of power system data. ...
Article
The Internet of things with cloud computing offers high-performance computing, storage and networking services, but it still suffers from high transmission and processing latency, poor scalability and other problems. The Internet of things with edge computing can better meet the increasing requirements of electricity consumers for service quality, especially the increasingly stringent need for low delay. On the other hand, edge intelligent network technology can offer edge smart sensing while significantly improving the efficiency of task execution, but it leads to a massive collaborative task scheduling optimization problem. To solve this problem, this paper studies a ubiquitous power internet of things (UPIoT) smart sensing network edge computing model and an improved multi-node cluster cooperative scheduling optimization strategy. A cluster server is added to the edge-aware computing network, and an improved low-delay edge task collaborative scheduling algorithm (LLETCS) is designed using vertical cooperation and a multi-node cluster collaborative computing scheme between edge-aware networks. The problem is then transformed based on linear reconstruction techniques, and a parallel optimization framework for solving it is proposed. The simulation results suggest that the proposed scheme can more effectively reduce UPIoT edge computing latency and improve the quality of service in UPIoT smart sensing networks.
... Edge computing nodes should be considered in the future [226]. Evidently, algorithms are optimized for efficiency rather than humanity. We must ensure that algorithms represent human values and principles as they govern the future. ...
Article
Self-powered sensing systems augmented with machine learning (ML) represent a path toward the large-scale deployment of the internet of things (IoT). With autonomous energy-harvesting techniques, intelligent systems can continuously generate data and process them to make informed decisions. The development of self-powered intelligent sensing systems will revolutionize the design and fabrication of sensors and pave the way for intelligent robots, digital health, and sustainable energy. However, challenges remain regarding stable power harvesting, seamless integration of ML, privacy, and ethical implications. In this review, we first present three self-powering principles for sensors and systems, including triboelectric, piezoelectric, and pyroelectric mechanisms. Then, we discuss the recent progress in applied ML techniques on self-powered sensors followed by a new paradigm of self-powered sensing systems with learning capability and their applications in different sectors. Finally, we share our outlook of potential research needs and challenges presented in ML-enabled self-powered sensing systems and conclude with a road map for future directions.
... Since IoT devices can produce a massive amount of data that needs to be handled even on the edge node [4], [5], the edge computing paradigm is emerging to tackle this challenge. Edge devices include embedded systems, which translates into several design challenges imposed by the unique features of these systems, such as limited energy and resources and low computational capacity [6]. ...
Article
Full-text available
Internet of Things (IoT) as an area of tremendous impact, potential, and growth has emerged with the advent of smart homes, smart cities, and smart everything. In the age of IoT, edge devices have taken on greater importance, driving the need for more intelligence and advanced services at the network edge. Since edge devices are limited in compute, storage, and network capacity, they are easy to compromise, more exposed to attackers, and more vulnerable than standard computing systems such as PCs and servers. While several software-based countermeasures have been proposed, they require high computing power and resources that edge devices do not possess. Moreover, modern threats have become more complex and severe, requiring a level of security that can only be provided at the hardware level. In this paper, we realize an efficient hardware-assisted attack detection mechanism for edge devices through effective High-Level Synthesis (HLS) optimization techniques. To this end, we propose OptiEdge, a machine learning-guided hardware-assisted resource and timing estimation tool that can effectively reduce the design space exploration for edge devices' design. This is achieved by analyzing the suitability of different machine learning algorithms used for detection and how they affect hardware implementation overheads. By providing a comprehensive analysis of the accuracy, performance, and hardware efficiency of different ML algorithms, our work can assist researchers investigating this field of cybersecurity.
... In the era of artificial intelligence (AI), deploying deep learning tasks on Internet of Things (IoT) devices such as smartphones, drones, and self-driving cars not only greatly improves quality of life and raises production efficiency but also effectively increases the ability to perceive and understand the real world [1][2][3]. Vast amounts of data are generated at the IoT edge devices waiting to be uploaded to the cloud for further processing, which stretches the communication bandwidth, storage capacity, latency, energy consumption, security, and privacy of the edge devices [4]. Many pieces of research have shown that, compared to uploading to the cloud, deploying hardware computing facilities that can execute AI programs in IoT devices eliminates the back-and-forth flow of data between the edge and the center, and further relieves these strains [5]. ...
Article
Full-text available
Recently, analog in-memory computing (IMC) systems exhibit the considerable potential to break through the inherent high computational latency and energy cost of Von Neumann’s computer architecture. However, inefficient data convertor will inhibit the performance improvement of this system. The tradeoff between different data conversion circuit technologies has turned into one of the major driving forces for the analog IMC system-level improvements. The primary contribution is in two aspects. First, this article shows a digital-to-time-to-analog converter (DTAC) with the tradeoff of latency, area, and power consumption compared to a digital-to-time converter (DTC) and digital-to-analog converter (DAC). Second, we develop an innovative reconfigurable joint-quantization nonlinear analog-to-digital convertor (JQNL-ADC) architecture with lower quantization error by merging the two paradigms of uniform input quantization and uniform output quantization. Compared to conventional DAC, DTAC can reduce power and area by 50 × and 3 × , respectively. Compared to SAR-ADC, our JQNL-ADC can reduce area and power by 1.6 × and 2 × , respectively. In an example of ReRAM-based reconfigurable function-IMC (RFIMC) macro with 256-kb memory, our design can reach 112.9 TOPS/W@8bIN-8bW-8bO under the 28-nm process conditions.
... In a study conducted by Shi et al. [15], data-mining intelligence moves to the edge with the assistance of edge computing. There are various scenarios where high-speed data is the key component of analytics, and edge computing assists in processing such data [16]. ...
Article
Full-text available
The research study intends to understand the thematic dynamics of the internet of things (IoT), thereby aiming to address the general objective i.e. "To explore and streamline the IoT thematic dynamics with a focus on cross-cutting data mining, and IoT apps evidence-based publication trends". To meet this objective, secondary research has been compiled as part of the analytic process. It was found from the research that IoT continues to evolve with significant degrees of proliferation. Complementary and trailblazing data mining (DM) with more access to cloud computing platforms has catalyzed accelerating the achievement of planned technological innovations. The outcome has been myriads of apps currently used in different thematic landscapes. Based on available data on app searches by users, and between 2016 and 2019, themes like sports, supply chain, and agriculture maintained positive trends over the four years. The emerging Internet of Nano-Things was found to be beneficial in many sectors. Wireless Sensor Networks (WSNs) were also found to be emerging with more accurate and effective results in gathering information along with processing data and communication technologies. In summary, available data indicate that IoT is happening and has a significant implication on data mining. All indications suggest that it will continue to grow and increasingly affect how we interact with "things". A backdrop of concerns exists ranging from developing standard protocols to protecting individual privacy.
... When optimizing the communication network [22], we consider latency, reliability, velocity, and bandwidth. Alongside rather conventional networks (e.g., LAN, (SD-)WAN, WLAN, Wi-Fi, etc.), concepts of mobile cloud and edge computing (MCC, MEC) have emerged [23,24]. The most relevant, fog computing, is a distributed computing paradigm that acts as a layer between cloud data centers and devices/sensors in IoT networks. ...
Conference Paper
Full-text available
Data about the urban transportation system is specific, has high variety, and includes security-critical attributes, as well as personal data. Transportation-related operational and development decisions are complex and require high amount of data from various sources. Data is collected and generated by multiple standalone organizations, between which data sharing is not sufficient. The potential of cloud-based data storage and computing have been recognized; however, the high complexity and variety of transport data requires new design methods. We elaborate a transport specific cloud architecture, which is a combination of hybrid-, mobile-and edge cloud computing. The hybrid cloud architecture enables a cross-organizational IT integration, improving communication, data-sharing, and cooperation between transport organizations. The edge-and cloud computing architecture induces the IT integration of edge devices. We identified two major groups of edge devices, and two application fields respectively. First, vehicular networks are considered, where edge devices are vehicles and infrastructural elements, e.g., sensors. Second, mobile devices are examined, where mobile networks enable internet-of-things concepts. The spread of 5G networks is also facilitated by the application of the elaborated model, as integrated edge devices and edge-cloud communication require a scalable, fast, reliable and high bandwidth communication system.
... In the glacier area, IoT-enabled edge devices make it possible to predict potential incidents within a short response time and to update the respective authorities so they can respond accordingly, minimizing damage and loss [22]. The edge devices are powered with an intelligent energy algorithm, so that they can efficiently use and manage energy for data computation and also help enhance the energy efficiency of the deployed edge devices. ...
Article
Full-text available
The United Nations is deeply concerned about global warming and its impacts on natural resources. Simultaneously, it has been recommended that cutting-edge technologies be employed to predict the impact of climate change on natural reservoirs such as glaciers. Motivated by the above facts, this study investigates the impact and significance of emerging and cutting-edge technologies like Remote Sensing, the Internet of Things (IoT), Artificial Intelligence (AI), Unmanned Aerial Vehicles (UAVs), and robots for digitalization in glaciers. The study identified that the convergence of AI, Machine Learning (ML), and Deep Learning approaches with spatio-temporal data makes it possible to detect non-linear characteristics, especially in high mountainous regions, due to their diversity and unpredictable nature. The article suggests valuable recommendations such as establishing an intelligent ecosystem in glaciers, low-cost intelligent IoT devices with intelligent energy algorithms, ML-empowered edge devices, glacier-resistant rescue robots, and wearable IoT-based safety guide devices.
... MEC [7][8][9] technology has been developing rapidly in recent years and is very promising [10][11][12]. Computation task offloading is one of the key technologies of MEC [13][14][15][16]. ...
Article
Full-text available
As a new technology, the Internet of Vehicles (IoV) needs high bandwidth and low delay. However, current on-board mobile terminal equipment cannot meet the needs of the IoV. Therefore, using mobile edge computing (MEC) can solve the problems of energy consumption and time delay in the IoV. In MEC, task offloading can effectively solve the problem of resource constraints on mobile devices, but it is not optimal to offload all tasks to edge servers. In this paper, the vehicle computation task is regarded as a directed acyclic graph (DAG), and the execution location and scheduling order of task nodes are optimized. Considering the energy consumption and delay of the system, vehicle computation offloading is formulated as a constrained multi-objective optimization problem (CMOP), and a Non-dominated Sorting Genetic Strategy (NSGS) is then proposed to solve the CMOP. The proposed algorithm can realize local and edge parallel processing to reduce delay and energy consumption. Finally, a large number of experiments are carried out to demonstrate the performance of the algorithm. The experimental results show that the algorithm can make the optimal decision in practical applications.
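
To illustrate how a DAG-structured vehicle task and an offloading decision vector can be evaluated for delay and energy (the two objectives of the CMOP above), here is a deliberately simplified sketch; CPU frequencies, powers, transfer delay, and the sequential execution model are all assumptions, and the NSGS search itself is not shown.

```python
# Hypothetical sketch: evaluate one offloading decision for a DAG of vehicle tasks.
# Each task runs either locally (0) or on the edge server (1); a task may start
# only after all of its predecessors have finished.
tasks = {"t0": [], "t1": ["t0"], "t2": ["t0"], "t3": ["t1", "t2"]}   # DAG edges
cycles = {"t0": 2e8, "t1": 4e8, "t2": 3e8, "t3": 1e8}                # CPU cycles
F_LOCAL, F_EDGE = 1e9, 5e9            # CPU frequencies (Hz), assumed values
P_LOCAL, P_TX = 0.8, 0.3              # local compute power / transmit power (W)
T_TX = 0.05                           # per-task offloading transfer delay (s)

def evaluate(decision):
    finish, energy = {}, 0.0
    for t in tasks:                                   # dict order is topological here
        ready = max((finish[p] for p in tasks[t]), default=0.0)
        if decision[t] == 0:                          # execute locally
            d = cycles[t] / F_LOCAL
            energy += P_LOCAL * d
        else:                                         # offload to the edge server
            d = T_TX + cycles[t] / F_EDGE
            energy += P_TX * T_TX
        finish[t] = ready + d
    return max(finish.values()), energy               # (makespan, vehicle energy)

print(evaluate({"t0": 0, "t1": 1, "t2": 1, "t3": 0}))
```
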
... There are various designed devices already using such systems, such as smart TVs, refrigerators, air conditioners, washing machines, etc. Hence, edge computing is really helpful in implementing such smart home ideas using artificial intelligence algorithms [105]. Therefore, the vision of 6G networks will provide a strong base for the implementation and execution of various smart services. ...
... Edge computing can reduce the computing pressure of the cloud servers and the total delay of communication and monitoring [10]. The combination of edge computing and intelligence, i.e., edge intelligence, can allow the sensing data of power system to be processed and analyzed locally and intelligently, which effectively reduces the delay and saves bandwidth resources. ...
Preprint
Full-text available
Smart grid plays a crucial role for the smart society and the upcoming carbon neutral society. Achieving autonomous smart grid fault detection is critical for smart grid system state awareness, maintenance and operation. This paper focuses on fault monitoring in smart grid and discusses the inherent technical challenges and solutions. In particular, we first present the basic principles of smart grid fault detection. Then, we explain the new requirements for autonomous smart grid fault detection, the technical challenges and their possible solutions. A case study is introduced, as a preliminary study for autonomous smart grid fault detection. In addition, we highlight relevant directions for future research.
... Its main sponsor, the European Telecommunications Standards Institute (ETSI), has defined different categories, namely mobile edge computing (MEC) and multi-access edge computing (MEC), for mobile and heterogeneous networks, respectively; as research progresses, heterogeneous access networks are expected to be extended to non-3GPP networks and other wired networks [10]. Meanwhile, scholars worldwide have defined MEC mainly from three perspectives: data flow [11], network location [12], and cloud computing evolution [13], with different descriptive viewpoints but consistent connotations. Computational offloading is one of the core aspects and key technologies by which MEC realizes service triage. ...
Article
Full-text available
To address the contradiction between surging business demand and the limited resources of MEC, firstly, a "cloud, fog, edge, and end" collaborative architecture is constructed for the smart campus scenario, and an optimization model of joint computation offloading and resource allocation is proposed with the objective of minimizing the weighted sum of delay and energy consumption. Secondly, to improve the convergence of the algorithm and its ability to escape local optima, chaos theory and an adaptive mechanism are introduced, the update method of the teaching-learning-based optimization (TLBO) algorithm is integrated, and the chaos teaching-learning particle swarm optimization (CTLPSO) algorithm is proposed; its advantages are verified by comparison with existing improved algorithms. Finally, the offloading success rate advantage is significant when the number of tasks in the model exceeds 50, the system optimization effect is significant when the number of tasks exceeds 60, and the model converges to the optimal solution in about 100 iterations. The proposed architecture can effectively alleviate the problem of limited MEC resources, the proposed algorithm has obvious advantages in convergence, stability, and complexity, and the optimization strategy can improve the offloading success rate and reduce the total system overhead.
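
Two ingredients of the CTLPSO algorithm described above lend themselves to a short sketch: chaotic (logistic-map) initialization of the particle swarm, which is one common way to improve coverage of the search space and escape local optima, and the weighted delay-plus-energy objective. The code below is an assumed illustration of both, not the paper's implementation.

```python
import random

def chaotic_init(n_particles, dim, lo=0.0, hi=1.0, mu=4.0):
    """Initialise particle positions with a logistic map instead of uniform noise.

    Chaotic sequences spread points over the search space, which is one common
    way (assumed here) to help a PSO variant avoid premature convergence.
    """
    swarm = []
    x = random.random()
    for _ in range(n_particles):
        particle = []
        for _ in range(dim):
            x = mu * x * (1.0 - x)            # logistic map, values stay in (0, 1)
            particle.append(lo + x * (hi - lo))
        swarm.append(particle)
    return swarm

def fitness(delay, energy, w=0.6):
    """Weighted sum of delay and energy consumption, the objective in the abstract."""
    return w * delay + (1.0 - w) * energy

print(len(chaotic_init(5, 3)), fitness(0.12, 0.4))
```
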
... The emergence of the IoT and the rise of wealthy cloud providers have brought about a new computing model, edge computing, which demands the management of data at the network edge [65]. Edge computing will resolve issues related to response time requirements, battery life constraints, bandwidth cost savings, and data security and privacy. ...
Article
Full-text available
According to the sustainable development goals, the construction industry is one of the vital industries that can build resilient and sustainable infrastructure for human settlements. The traditional approaches in the construction industry cause distinct challenges, including environmental pollution and excess energy usage; however, the integration of emerging technologies will assist in reducing this impact and also enhance activities in the construction industry. Motivated by these facts, this study aims to address the significance of automation in the construction industry with distinct emerging technologies like the Internet of Things (IoT), automation, radio frequency identification (RFID), building information modeling (BIM), augmented reality (AR), and virtual reality (VR). The large amount of data generated from IoT, RFID, BIM, and AR/VR provides an opportunity for big data and artificial intelligence (AI) to extract meaningful insights related to events in the construction industry. Furthermore, edge and fog computing technology encourages us to implement AI at the edge network for analytics on the edge device. Based on the above analysis, the article discusses recommendations that could assist in the further enhancement and implementation of automation in the construction industry. Cloud-assisted AR/VR, integration of AI with IoT infrastructure, 4D printing, adopting blockchain in the construction industry, and smart robotics are the recommendations addressed in this article.
Article
Full-text available
Industry 4.0 corresponds to the Fourth Industrial Revolution, resulting from technological innovation and research multidisciplinary advances. Researchers aim to contribute to the digital transformation of the manufacturing ecosystem both in theory and mainly in practice by identifying the real problems that the industry faces. Researchers focus on providing practical solutions using technologies such as the Industrial Internet of Things (IoT), Artificial Intelligence (AI), and Edge Computing (EC). On the other hand, universities educate young engineers and researchers by formulating a curriculum that prepares graduates for the industrial market. This research aimed to investigate and identify the industry’s current problems and needs from an educational perspective. The research methodology is based on preparing a focused questionnaire resulting from an extensive recent literature review used to interview representatives from 70 enterprises operating in 25 countries. The produced empirical data revealed (1) the kind of data and business management systems that companies have implemented to advance the digitalization of their processes, (2) the industries’ main problems and what technologies (could be) implemented to address them, and (3) what are the primary industrial needs and how they can be met to facilitate their digitization. The main conclusion is that there is a need to develop a taxonomy that shall include industrial problems and their technological solutions. Moreover, the educational needs of engineers and researchers with current knowledge and advanced skills were underlined.
Article
It is well known that power plants worldwide present difficult and hazardous environments, which may cause harm to on-site employees. Remote and autonomous operations in such places are currently increasing with the aid of technological improvements in communications and processing hardware. Virtual and augmented reality provide applications for crew training and remote monitoring, which also rely on 3D environment reconstruction techniques with near real-time requirements for environment inspection. Nowadays, most techniques rely on offline data processing, heavy computation algorithms, or mobile robots, which can be dangerous in confined environments. Other solutions rely on robots, edge computing, and post-processing algorithms, constraining scalability and near real-time requirements. This work uses an edge-fog computing architecture for data and processing offload applied to a 3D reconstruction problem, where the robots are at the edge and computer nodes are at the fog. The sequential processes are parallelized and layered, leading to a highly scalable approach. The architecture is analyzed against a traditional edge computing approach. Both are implemented in our scanning robots mounted in a real power plant. The 5G network application is presented along with a brief discussion on how this technology can benefit and allow the overall distributed processing. Unlike other works, we present real data for more than one proposed robot working in parallel on site, exploring hardware processing capabilities and the local Wi-Fi network characteristics. We also conclude with the required scenario for remote monitoring to take place over a private 5G network.
Conference Paper
Non-invasive biometric methods such as facial recognition reduce the risks and difficulty that come with handling confidential biometric data. Also, it makes the task of providing security simpler while ensuring accurate results simultaneously. Integrating biometric systems with Edge Computing and Deep Learning, makes the system more robust and dynamic by reducing latency and bandwidth usage. The purpose of this paper is to present an optimal facial recognition model suitable for a wide range of applications. The system uses HOG descriptors combined with Deep Learning to identify a person from a custom database. The facial recognition system is hosted on low-power devices with the help of an Internet-of-Things (IoT) network, making it entirely edge-based. A standard Message Queue Telemetry Transport (MQTT) protocol hosted over a local Wi-Fi network is used to facilitate communication between devices in the network. An accuracy of 98.33% has been achieved.
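
Since the system above publishes recognition events between edge devices over MQTT on a local Wi-Fi network, a minimal sketch of that messaging step is shown below using the paho-mqtt client library; the broker address, topic name, and payload fields are assumptions, and the recognition step itself is stubbed out.

```python
import json
import time
import paho.mqtt.client as mqtt   # MQTT client library (paho-mqtt)

BROKER = "192.168.1.10"             # assumed local Wi-Fi broker address
TOPIC = "campus/door1/recognition"  # assumed topic name

# paho-mqtt 1.x style constructor; with paho-mqtt 2.x pass
# mqtt.CallbackAPIVersion.VERSION2 as the first argument.
client = mqtt.Client()
client.connect(BROKER, 1883, keepalive=60)
client.loop_start()

def publish_recognition(person_id, confidence):
    """Send one recognition event from the edge device to subscribers."""
    payload = json.dumps({
        "person": person_id,
        "confidence": confidence,
        "ts": time.time(),
    })
    client.publish(TOPIC, payload, qos=1)

# In the real system this would be called after HOG + deep-learning matching;
# here we just publish a dummy event.
publish_recognition("alice", 0.97)
client.loop_stop()
```
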
Article
Nowadays, as the smart campus concept becomes a reality, virtual reality (VR) applications are being applied to connect students with the virtual teaching world via VR devices (VDs), enhancing learning efficiency. Nevertheless, VR applications are latency-sensitive, while VDs have shortcomings in handling many VR applications simultaneously. Fortunately, mobile edge computing (MEC) has been recognized as a promising solution that can bring abundant resources to VDs to relieve hardware limits. However, the computing resources of edge nodes in MEC are limited, and thus how to allocate resources effectively is critical. Meanwhile, some sensitive student information collected by VDs needs to be protected. In view of this, we investigate computation offloading for educational VR applications in an MEC-enabled smart campus. The aim is to optimize the motion-to-photon latency, energy consumption, and resource utilization while satisfying privacy and security constraints. To this end, we propose a new multi-objective optimization method using MoCell. Finally, experimental evaluations are designed to illustrate the effectiveness and superiority of our proposed method.
Article
Task offloading and resource allocation are the major elements of edge computing. A reasonable task offloading strategy and resource allocation scheme can reduce task processing time and save system energy consumption. Most of the current studies on the task migration of edge computing only consider the resource allocation between terminals and edge servers, ignoring the huge computing resources in the cloud center. In order to sufficiently utilize the cloud and edge server resources, we propose a coarse-grained task offloading strategy and intelligent resource matching scheme under Cloud-Edge collaboration. We consider the heterogeneity of mobile devices and inter-channel interference, and we establish the task offloading decision of multiple end-users as a game-theory-based task migration model with the objective of maximizing system utility. In addition, we propose an improved game-theory-based particle swarm optimization algorithm to obtain task offloading strategies. Experimental results show that the proposed scheme outperforms other schemes with respect to latency and energy consumption, and it scales well with increases in the number of mobile devices.
Thesis
This PhD thesis, a collaboration between the Universities of Mons and Liege (Belgium), deals with the conceptualization and development of a versatile distributed cloud architecture for data management in the Smart Farming domain. This architecture is generic enough to be used in other domains. Researchers are pressured by funders to maintain and share their experimental and test databases for use in other projects. Data reuse is motivated by the possibility of investing in a wider range of projects while avoiding data-related redundancies. Our approach is in line with the Open Data and Open Science framework where distributed architectures are used for massive data storage. Many generic IoT architectures and platforms exist on the market to meet various needs. However, there is a lack of specialized tools for research and their valorization on the one hand and addressing the specific needs of communities of researchers on the other hand. Moreover, the existing platforms remain dependent on the maintenance and the will of the company and/or the community that develops them. In terms of scientific research, platforms exist in the form of ecosystems that are mostly compartmentalized, which does not allow for a practical industrial valorization of the research conducted. Based on these findings, we propose in this PhD thesis to design a Cloud architecture specific to Smart Farming, sustainable, improvable and adaptable according to the use cases without calling into question the whole architecture. We also propose the implementation of a value chain starting from the acquisition of data, their processing and storage, the hosting of applications allowing their exploitation until the valorization and their exploitation by the final user. Our research is based on a concrete use case that highlights the limitations that the cloud architecture must be able to address. This use case is the behavioral analysis of farm animals at pasture. Researchers are increasingly encouraged to preserve and exchange their data, which translates into needs for the durability of their infrastructure, traceability and documentation of their data, and standardization of their tools. They also need to develop real-time or batch processing chains to handle data from multiple sources and in various formats. Our architecture is innovative, modular and adaptable to a wide range of use cases without having to question its structure or its constituent software bricks. The use of interchangeable software components makes the architecture durable and makes it immune to the disappearance of one of its software components. On the other hand, a software brick can be replaced by another one that is more adapted or more efficient. In addition, it offers the possibility of hosting and subsequently monetizing the applications developed by researchers. Its Edge Computing component (processing capacity located at the edge of the network) enables the deployment of micro-services and Artificial Intelligence (AI) algorithms adapted as close as possible to the sensors, using containerization techniques.
Preprint
Over the last decade, the cloud computing landscape has transformed from a centralised architecture made of large data centres to a distributed and heterogeneous architecture embracing edge and IoT units. This shift has created the so-called cloud-edge continuum, which closes the gap between large data centres and end-user devices. Existing solutions for programming the continuum are, however, dominated by proprietary silos and incompatible technologies, built around dedicated devices and run-time stacks. In this position paper, we motivate the need for an interoperable environment that would run seamlessly across hardware devices and software stacks, while achieving good performance and a high level of security -- a critical requirement when processing data off-premises. We argue that the technology provided by WebAssembly running on modern virtual machines and shielded within trusted execution environments, combined with a core set of services and support libraries, allows us to meet both goals. We also present preliminary results from a prototype deployed on the cloud-edge continuum.
Chapter
The internet of things describes the connection of distinctive embedded computing devices within the internet. It is the network of connection of physical things that has electronics that have been embedded within their architecture in order to sense and communicate to an external environment. IoT has turned up as a very powerful and promising technology, which brings up significant economic, social, and technical development. Meanwhile, it also brings up various security challenges. At present, nearly nine billion ‘things' (physical objects) are connected to the internet. Security is the major concern nowadays as the risks have very high consequences. This chapter presents a detailed view on the internet of things and the advancements of various technologies like cloud, fog, edge computing, IoT architectures, along with various technologies used to prevent and resolve these security and privacy issues of IoT. Finally, future research opportunities and challenges are discussed.
Chapter
Natural language interfaces are gaining popularity as an alternative interface for non-technical users. Natural language interface to database (NLIDB) systems have been attracting considerable interest recently; they are being developed to accept a user's query in natural language (NL), convert this NL query into an SQL query, and execute the SQL query to extract the resultant data from the database. This Text-to-SQL task is a long-standing, open problem, and the standard approach towards solving it is to implement a sequence-to-sequence model. In this paper, I recast the Text-to-SQL task as a machine translation problem using sequence-to-sequence-style neural network models. To this end, I have introduced a parallel corpus that I have developed using the WikiSQL dataset. Though there is a lot of work done in this area using sequence-to-sequence-style models, most of the state-of-the-art models use semantic parsing or a variation of it. None of these models' accuracy exceeds 90%. In contrast, my model is based on a very simple architecture, as it uses the open-source neural machine translation toolkit OpenNMT, which implements a standard SEQ2SEQ model; and though my model's performance is not better than the said models in predicting on test and development datasets, its training accuracy is higher than that of any existing NLIDB system to the best of my knowledge.
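
Casting Text-to-SQL as machine translation requires an aligned parallel corpus: one file of NL questions and one file of SQL queries, line by line, which a seq2seq toolkit such as OpenNMT can then consume. The sketch below shows one assumed way to produce such files from WikiSQL-style records; in the real dataset the SQL is stored as a structured object and must first be rendered to a query string.

```python
def to_parallel(wikisql_records, src_path="train.nl", tgt_path="train.sql"):
    """Write NL questions and their SQL queries as aligned line-by-line files.

    Such source/target files are the usual input format for seq2seq toolkits
    (e.g. OpenNMT); each line of the source file translates into the same line
    of the target file.
    """
    with open(src_path, "w", encoding="utf-8") as src, \
         open(tgt_path, "w", encoding="utf-8") as tgt:
        for rec in wikisql_records:
            src.write(rec["question"].strip().lower() + "\n")
            tgt.write(rec["sql"].strip() + "\n")

# Hypothetical WikiSQL-style records already rendered to query strings.
records = [
    {"question": "How many engineers are in Berlin?",
     "sql": "SELECT COUNT(name) FROM employees WHERE city = 'Berlin' AND role = 'engineer'"},
    {"question": "List the titles published after 2015",
     "sql": "SELECT title FROM papers WHERE year > 2015"},
]
to_parallel(records)
print(open("train.nl", encoding="utf-8").read())
```
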
Purpose: This study aims to provide a systematic review of the existing literature on the applications of deep learning (DL) in hospitality, tourism and travel, as well as an agenda for future research.
Design/methodology/approach: Covering a five-year time span (2017–2021), this study systematically reviews journal articles archived in four academic databases: Emerald Insight, Springer, Wiley Online Library and ScienceDirect. All 159 articles reviewed were characterised using six attributes: publisher, year of publication, country studied, type of value created, application area and future suggestions (and/or limitations).
Findings: Five application areas and six challenge areas are identified, which characterise the application of DL in hospitality, tourism and travel. In addition, it is observed that DL is mainly used to develop novel models that create business value by forecasting (or projecting) some parameter(s) and promoting better offerings to tourists.
Research limitations/implications: Although a few prior papers have provided a literature review of artificial intelligence in tourism and hospitality, none has drilled down to the specific area of DL applications within the context of hospitality, tourism and travel.
Originality/value: To the best of the authors' knowledge, this paper represents the first theoretical review of academic research on DL applications in hospitality, tourism and travel. An integrated framework is proposed to expose future research trajectories wherein scholars can contribute significant value. The exploration of the DL literature has significant implications for industry and practice, given that this is, as far as the authors know, the first systematic review of existing literature in this research area.
Chapter
This chapter examines Industry 4.0-driven supply chains, with a specific focus on technological advancements among logistics service providers. The aim of this work is to identify the service characteristics of Logistics Service Providers (LSPs) and the technological advancements they adopt to satisfy customer needs. The chapter provides novel insights into Industry 4.0-driven Supplier Selection and Evaluation (SSE) processes, the risks involved in changing LSPs, and how an organisation can plan a seamless LSP transition.
Conference Paper
Full-text available
Despite the broad utilization of cloud computing, some applications and services still cannot benefit from this popular computing paradigm due to inherent problems such as unacceptable latency and the lack of mobility support and location awareness. As a result, fog computing has emerged as a promising infrastructure to provide elastic resources at the edge of the network. In this paper, we discuss current definitions of fog computing and similar concepts, and propose a more comprehensive definition. We also analyze the goals and challenges of a fog computing platform, and present a platform design with several exemplar applications. We finally implement and evaluate a prototype fog computing platform.
Article
Full-text available
In the inaugural issue of MC2R in April 1997 [24], I highlighted the seminal influence of mobility in computing. At that time, the goal of "information at your fingertips anywhere, anytime" was only a dream. Today, through relentless pursuit of innovations in wireless technology, energy-efficient portable hardware and adaptive software, we have largely attained this goal. Ubiquitous email and Web access is a reality that is experienced by millions of users worldwide through their Blackberries, iPhones, iPads, Windows Phone devices, and Android-based devices. Mobile Web-based services and location-aware advertising opportunities have emerged, triggering large commercial investments. Mobile computing has arrived as a lucrative business proposition. Looking ahead, what are the dreams that will inspire our future efforts in mobile computing? We begin this paper by considering some imaginary mobile computing scenarios from the future. We then extract the deep assumptions implicit in these scenarios, and use them to speculate on the future trajectory of mobile computing.
Article
Full-text available
We describe the architecture and prototype implementation of an assistive system based on Google Glass devices for users in cognitive decline. It combines the first-person image capture and sensing capabilities of Glass with remote processing to perform real-time scene interpretation. The system architecture is multi-tiered. It offers tight end-to-end latency bounds on compute-intensive operations, while addressing concerns such as limited battery capacity and limited processing capability of wearable devices. The system gracefully degrades services in the face of network failures and unavailability of distant architectural tiers.
Conference Paper
Full-text available
Despite the tremendous market penetration of smartphones, their utility has been and will remain severely limited by their battery life. A major source of smartphone battery drain is accessing the Internet over cellular or WiFi connection when running various apps and services. Despite much anecdotal evidence of smartphone users experiencing quicker battery drain in poor signal strength, there has been limited understanding of how often smartphone users experience poor signal strength and the quantitative impact of poor signal strength on the phone battery drain. The answers to such questions are essential for diagnosing and improving cellular network services and smartphone battery life and help to build more accurate online power models for smartphones, which are building blocks for energy profiling and optimization of smartphone apps. In this paper, we conduct the first measurement and modeling study of the impact of wireless signal strength on smartphone energy consumption. Our study makes four contributions. First, through analyzing traces collected on 3785 smartphones for at least one month, we show that poor signal strength of both 3G and WiFi is routinely experienced by smartphone users, both spatially and temporally. Second, we quantify the extra energy consumption on data transfer induced by poor wireless signal strength. Third, we develop a new power model for WiFi and 3G that incorporates the signal strength factor and significantly improves the modeling accuracy over the previous state of the art. Finally, we perform what-if analysis to quantify the potential energy savings from opportunistically delaying network traffic by exploring the dynamics of signal strength experienced by users.
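The core finding, that the same transfer costs more energy under poor signal strength, can be illustrated with a toy model in which transfer energy is radio power multiplied by time on the radio. The throughput and power figures below are invented for illustration and are not the paper's fitted power model.

```python
def transfer_energy_joules(bytes_to_send, throughput_mbps, radio_power_watts):
    """Energy = radio power x time on the radio (a deliberately simple model)."""
    seconds = (bytes_to_send * 8) / (throughput_mbps * 1e6)
    return radio_power_watts * seconds

# Illustrative (invented) operating points: weak signal means lower throughput
# and a higher transmit power level, so the same payload costs more energy.
payload = 5 * 1024 * 1024  # 5 MB
good_signal = transfer_energy_joules(payload, throughput_mbps=8.0, radio_power_watts=1.0)
poor_signal = transfer_energy_joules(payload, throughput_mbps=1.0, radio_power_watts=2.5)
print(f"good signal: {good_signal:.1f} J, poor signal: {poor_signal:.1f} J")
```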
Article
Full-text available
Smartphones have exploded in popularity in recent years, becoming ever more sophisticated and capable. As a result, developers worldwide are building increasingly complex applications that require ever-increasing amounts of computational power and energy. In this paper we propose ThinkAir, a framework that makes it simple for developers to migrate their smartphone applications to the cloud. ThinkAir exploits the concept of smartphone virtualization in the cloud and provides method-level computation offloading. Advancing on previous work, it focuses on the elasticity and scalability of the cloud and enhances the power of mobile cloud computing by parallelizing method execution using multiple virtual machine (VM) images. We implement ThinkAir and evaluate it with a range of benchmarks, from simple micro-benchmarks to more complex applications. First, we show that execution time and energy consumption decrease by two orders of magnitude for an N-queens puzzle application and by one order of magnitude for a face detection and a virus scan application. We then show that a parallelizable application can invoke multiple VMs to execute in the cloud in a seamless and on-demand manner so as to achieve greater reductions in execution time and energy consumption. We finally use a memory-hungry image combiner tool to demonstrate that applications can dynamically request VMs with more computational power in order to meet their computational requirements.
Article
Full-text available
Fog Computing extends the Cloud Computing paradigm to the edge of the network, thus enabling a new breed of applications and services. Defining characteristics of the Fog are: a) low latency and location awareness; b) widespread geographical distribution; c) mobility; d) a very large number of nodes; e) the predominant role of wireless access; f) a strong presence of streaming and real-time applications; and g) heterogeneity. In this paper we argue that these characteristics make the Fog the appropriate platform for a number of critical Internet of Things (IoT) services and applications, namely Connected Vehicle, Smart Grid, Smart Cities and, in general, Wireless Sensor and Actuator Networks (WSANs).
Book
Full-text available
Foreword by Peter Friess & Gérald Santuci: It goes without saying that we are very pleased to publish this Clusterbook and to place it in your hands today. The Cluster of European Research Projects on the Internet of Things (CERP-IoT) comprises around 30 major research initiatives, platforms and networks working in the field of identification technologies, such as Radio Frequency Identification, and in what could tomorrow become an Internet-connected and inter-connected world of objects. The book before you reports on the research and innovation issues at stake and demonstrates approaches and examples of possible solutions. On closer inspection you will see that the Cluster reflects exactly the ongoing developments towards a future Internet of Things: growing use of identification technologies, massive deployment of simple and smart devices, and increasing connection between objects and systems. Of course, many developments are less directly derived from the core research area but contribute significantly to creating the "big picture" and the paradigm change. We are also conscious of the need to maintain Europe's strong position in these fields and the results already achieved, while at the same time understanding the challenges ahead as a global endeavour with our international partners. As regards international co-operation, the cluster is committed to increasing the number of common activities with its existing international partners and to reaching out to stakeholders in other countries. However, we are just at the beginning and, following the prognostics that predict 50 to 100 billion devices to be connected by 2020, the true research work starts now. The European Commission is determined to implement its Internet of Things policy to support an economic revival and provide a better life to its citizens, and it has just selected several new Internet of Things research projects from the last call for proposals as part of the 7th Framework Programme on European Research. We wish you a pleasant and enjoyable reading and ask you to stay connected with us in the future. Special thanks go to Harald Sundmaeker and his team, who made a remarkable effort in assembling this Clusterbook.
Conference Paper
Full-text available
Energy efficiency is a fundamental consideration for mobile devices. Cloud computing has the potential to save mobile client energy but the savings from offloading the computation need to exceed the energy cost of the additional communication. In this paper we provide an analysis of the critical factors affecting the energy consumption of mobile clients in cloud computing. Further, we present our measurements about the central characteristics of contemporary mobile handheld devices that define the basic balance between local and remote computing. We also describe a concrete example, which demonstrates energy savings. We show that the trade-offs are highly sensitive to the exact characteristics of the workload, data communication patterns and technologies used, and discuss the implications for the design and engineering of energy-efficient mobile cloud computing solutions.
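The balance the authors analyze reduces to a break-even rule: offloading only saves energy when the cost of shipping the input and result over the network is below the cost of computing locally. A minimal sketch, with invented numbers rather than the paper's measurements:

```python
def should_offload(local_compute_j, data_bytes, link_mbps, radio_power_w):
    """Offload only if the communication energy is below the local compute energy.
    A deliberately simplified rule; real systems also weigh latency and idle costs."""
    comm_seconds = (data_bytes * 8) / (link_mbps * 1e6)
    comm_energy_j = radio_power_w * comm_seconds
    return comm_energy_j < local_compute_j, comm_energy_j

# Invented example: a task costing 12 J locally, 2 MB of data over a 5 Mbit/s link.
offload, comm_j = should_offload(local_compute_j=12.0, data_bytes=2 * 1024 * 1024,
                                 link_mbps=5.0, radio_power_w=1.2)
print(f"communication energy ~{comm_j:.1f} J -> offload: {offload}")
```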
Conference Paper
Full-text available
Cloud computing has seen tremendous growth, particularly for commercial web applications. The on-demand, pay-as-you-go model creates a flexible and cost-effective means to access compute resources. For these reasons, the scientific computing community has shown increasing interest in exploring cloud computing. However, the underlying implementation and performance of clouds are very different from those at traditional supercomputing centers. It is therefore critical to evaluate the performance of HPC applications in today's cloud environments to understand the tradeoffs inherent in migrating to the cloud. This work represents the most comprehensive evaluation to date comparing conventional HPC platforms to Amazon EC2, using real applications representative of the workload at a typical supercomputing center. Overall results indicate that EC2 is six times slower than a typical mid-range Linux cluster, and twenty times slower than a modern HPC system. The interconnect on the EC2 cloud platform severely limits performance and causes significant variability.
Conference Paper
Full-text available
This paper presents MAUI, a system that enables fine-grained energy-aware offload of mobile code to the infrastructure. Previous approaches to these problems either relied heavily on programmer support to partition an application, or they were coarse-grained, requiring full process (or full VM) migration. MAUI uses the benefits of a managed code environment to offer the best of both worlds: it supports fine-grained code offload to maximize energy savings with minimal burden on the programmer. MAUI decides at run-time which methods should be remotely executed, driven by an optimization engine that achieves the best energy savings possible under the mobile device's current connectivity constraints. In our evaluation, we show that MAUI enables: 1) a resource-intensive face recognition application that consumes an order of magnitude less energy, 2) a latency-sensitive arcade game application that doubles its refresh rate, and 3) a voice-based language translation application that bypasses the limitations of the smartphone environment by executing unsupported components remotely.
Article
Full-text available
The cloud heralds a new era of computing where application services are provided through the Internet. Cloud computing can enhance the computing capability of mobile systems, but is it the ultimate solution for extending such systems' battery lifetimes?
Article
Full-text available
Cloud computing, the long-held dream of computing as a utility, has the potential to transform a large part of the IT industry, making software even more attractive as a service and shaping the way IT hardware is designed and purchased. Developers with innovative ideas for new Internet services no longer require the large capital outlays in hardware to deploy their service or the human expense to operate it. They need not be concerned about overprovisioning for a service whose popularity does not meet their predictions, thus wasting costly resources, or underprovisioning for one that becomes wildly popular, thus missing potential customers and revenue. Moreover, companies with large batch-oriented tasks can get results as quickly as their programs can scale, since using 1,000 servers for one hour costs no more than using one server for 1,000 hours.
Article
Full-text available
"Information at your fingertips anywhere, anytime" has been the driving vision of mobile computing for the past two decades. Through relentless pursuit of this vision, spurring innovations in wireless technology, energy-efficient portable hardware and adaptive software, we have now largely attained this goal. Ubiquitous email and Web access is a reality that is experienced by millions of users worldwide through their BlackBerries, iPhones, Windows Mobile, and other portable devices. Continuing on this road, mobile Web-based services and location-aware advertising opportunities have begun to appear, triggering large commercial investments. Mobile computing has arrived as a lucrative business proposition.
Article
Full-text available
Mobile computing continuously evolves through the sustained effort of many researchers. It seamlessly augments users' cognitive abilities via compute-intensive capabilities such as speech recognition, natural language processing, etc. By thus empowering mobile users, we could transform many areas of human activity. This article discusses the technical obstacles to these transformations and proposes a new architecture for overcoming them. In this architecture, a mobile user exploits virtual machine (VM) technology to rapidly instantiate customized service software on a nearby cloudlet and then uses that service over a wireless LAN; the mobile device typically functions as a thin client with respect to the service. A cloudlet is a trusted, resource-rich computer or cluster of computers that's well-connected to the Internet and available for use by nearby mobile devices. Our strategy of leveraging transiently customized proximate infrastructure as a mobile device moves with its user through the physical world is called cloudlet-based, resource-rich, mobile computing. Crisp interactive response, which is essential for seamless augmentation of human cognition, is easily achieved in this architecture because of the cloudlet's physical proximity and one-hop network latency. Using a cloudlet also simplifies the challenge of meeting the peak bandwidth demand of multiple users interactively generating and receiving media such as high-definition video and high-resolution images. Rapid customization of infrastructure for diverse applications emerges as a critical requirement, and our results from a proof-of-concept prototype suggest that VM technology can indeed help meet this requirement.
Article
Full-text available
The data centers used to create cloud services represent a significant investment in capital outlay and ongoing costs. Accordingly, we first examine the costs of cloud service data centers today. The cost breakdown reveals the importance of optimizing work completed per dollar invested. Unfortunately, the resources inside the data centers often operate at low utilization due to resource stranding and fragmentation. To attack this first problem, we propose (1) increasing network agility, and (2) providing appropriate incentives to shape resource consumption. Second, we note that cloud service providers are building out geo-distributed networks of data centers. Geo-diversity lowers latency to users and increases reliability in the presence of an outage taking out an entire site. However, without appropriate design and management, these geo-diverse data center networks can raise the cost of providing service. Moreover, leveraging geo-diversity requires services be designed to benefit from it. To attack this problem, we propose (1) joint optimization of network and data center resources, and (2) new systems and mechanisms for geo-distributing state.
Article
Full-text available
We describe a new approach to power saving and battery life extension on an untethered laptop through wireless remote processing of power-costly tasks. We ran a series of experiments comparing the power consumption of processes run locally with that of the same processes run remotely. We examined the trade-off between communication power expenditures and the power cost of local processing. This paper describes our methodology and results of our experiments. We suggest ways to further improve this approach, and outline a software design to support remote process execution.
Article
Full-text available
Although successive generations of middleware (such as RPC, CORBA, and DCOM) have made it easier to connect distributed programs, the process of distributed application decomposition has changed little: programmers manually divide applications into sub-programs and manually assign those subprograms to machines. Often the techniques used to choose a distribution are ad hoc and create one-time solutions biased to a specific combination of users, machines, and networks. We assert that system software, not the programmer, should manage the task of distributed decomposition. To validate our assertion we present Coign, an automatic distributed partitioning system that significantly eases the development of distributed applications. Given an application (in binary form) built from distributable COM components, Coign constructs a graph model of the application's inter-component communication through scenario-based profiling. Later, Coign applies a graph-cutting algorithm to partition the application across a network and minimize execution delay due to network communication. Using Coign, even an end user (without access to source code) can transform a non-distributed application into an optimized, distributed application. Coign has automatically distributed binaries from over 2 million lines of application code, including Microsoft's PhotoDraw 2000 image processor. To our knowledge, Coign is the first system to automatically partition and distribute binary applications.
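Coign's partitioning step can be sketched as a minimum s-t cut over the inter-component communication graph, here using the networkx library. The component names, communication costs, and the pinning of "ui" to the client and "storage" to the server are invented for illustration; the real system derives its graph from scenario-based profiling of COM binaries rather than a hand-written table.

```python
import networkx as nx

# Directed graph with symmetric 'capacity' edges representing profiled
# communication cost between components (invented values).
g = nx.DiGraph()
costs = {("ui", "render"): 50, ("render", "model"): 20,
         ("model", "storage"): 5, ("ui", "model"): 8}
for (a, b), c in costs.items():
    g.add_edge(a, b, capacity=c)
    g.add_edge(b, a, capacity=c)

# Minimum s-t cut: the cheapest set of cross-machine edges separating the
# client-pinned component from the server-pinned one.
cut_cost, (client_side, server_side) = nx.minimum_cut(g, "ui", "storage")
print(f"cut cost: {cut_cost}")
print("client:", sorted(client_side), "server:", sorted(server_side))
```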
Article
This paper presents an overview of the MobilityFirst network architecture, currently under development as part of the US National Science Foundation's Future Internet Architecture (FIA) program. The proposed architecture is intended to directly address the challenges of wireless access and mobility at scale, while also providing new services needed for emerging mobile Internet application scenarios. After briefly outlining the original design goals of the project, we provide a discussion of the main architectural concepts behind the network design, identifying key features such as separation of names from addresses, public-key based globally unique identifiers (GUIDs) for named objects, a global name resolution service (GNRS) for dynamic binding of names to addresses, storage-aware routing and late binding, content- and context-aware services, an optional in-network compute layer, and so on. This is followed by a brief description of the MobilityFirst protocol stack as a whole, along with an explanation of how the protocol works at end-user devices and inside network routers. Examples of specific advanced services supported by the protocol stack, including multi-homing, mobility with disconnection, and content retrieval/caching, are given for illustration. Further design details of two key protocol components, the GNRS name resolution service and the GSTAR routing protocol, are also described along with sample results from evaluation. In conclusion, a brief description of an ongoing multi-site experimental proof-of-concept deployment of the MobilityFirst protocol stack on the GENI testbed is provided.
Article
MapReduce and its variants have been highly successful in implementing large-scale data-intensive applications on commodity clusters. However, most of these systems are built around an acyclic data flow model that is not suitable for other popular applications. This paper focuses on one such class of applications: those that reuse a working set of data across multiple parallel operations. This includes many iterative machine learning algorithms, as well as interactive data analysis tools. We propose a new framework called Spark that supports these applications while retaining the scalability and fault tolerance of MapReduce. To achieve these goals, Spark introduces an abstraction called resilient distributed datasets (RDDs). An RDD is a read-only collection of objects partitioned across a set of machines that can be rebuilt if a partition is lost. Spark can outperform Hadoop by 10x in iterative machine learning jobs, and can be used to interactively query a 39 GB dataset with sub-second response time.
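The working-set reuse described here is easy to see in a few lines of PySpark (Spark's later Python API, used purely for illustration): an RDD is cached once and then scanned by several independent parallel operations without re-reading the input. The log file name is hypothetical.

```python
from pyspark import SparkContext

sc = SparkContext("local[*]", "rdd-reuse-demo")

# Parse a (hypothetical) log file once and keep the working set in memory.
errors = (sc.textFile("app.log")
            .filter(lambda line: "ERROR" in line)
            .cache())  # materialized on first action, then reused

# Multiple parallel operations over the same cached working set.
total = errors.count()
timeouts = errors.filter(lambda line: "timeout" in line).count()
print(total, timeouts)

sc.stop()
```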
Article
The Hadoop Distributed File System (HDFS) is designed to store very large data sets reliably, and to stream those data sets at high bandwidth to user applications. In a large cluster, thousands of servers both host directly attached storage and execute user application tasks. By distributing storage and computation across many servers, the resource can grow with demand while remaining economical at every size. We describe the architecture of HDFS and report on experience using HDFS to manage 25 petabytes of enterprise data at Yahoo!.
Article
Ubiquitous sensing enabled by Wireless Sensor Network (WSN) technologies cuts across many areas of modern day living. This offers the ability to measure, infer and understand environmental indicators, from delicate ecologies and natural resources to urban environments. The proliferation of these devices in a communicating-actuating network creates the Internet of Things (IoT), wherein sensors and actuators blend seamlessly with the environment around us, and the information is shared across platforms in order to develop a common operating picture (COP). Fuelled by the recent adaptation of a variety of enabling device technologies such as RFID tags and readers, near field communication (NFC) devices and embedded sensor and actuator nodes, the IoT has stepped out of its infancy and is the next revolutionary technology in transforming the Internet into a fully integrated Future Internet. As we move from www (static pages web) to web2 (social networking web) to web3 (ubiquitous computing web), the need for data-on-demand using sophisticated intuitive queries increases significantly. This paper presents a cloud-centric vision for worldwide implementation of the Internet of Things. The key enabling technologies and application domains that are likely to drive IoT research in the near future are discussed. A cloud implementation using Aneka, which is based on the interaction of private and public clouds, is presented. We conclude our IoT vision by expanding on the need for convergence of WSN, the Internet and distributed computing directed at the technological research community.
Conference Paper
Mobile applications are becoming increasingly ubiquitous and provide ever richer functionality on mobile devices. At the same time, such devices often enjoy strong connectivity with more powerful machines ranging from laptops and desktops to commercial clouds. This paper presents the design and implementation of CloneCloud, a system that automatically transforms mobile applications to benefit from the cloud. The system is a flexible application partitioner and execution runtime that enables unmodified mobile applications running in an application-level virtual machine to seamlessly off-load part of their execution from mobile devices onto device clones operating in a computational cloud. CloneCloud uses a combination of static analysis and dynamic profiling to partition applications automatically at a fine granularity while optimizing execution time and energy use for a target computation and communication environment. At runtime, the application partitioning is effected by migrating a thread from the mobile device at a chosen point to the clone in the cloud, executing there for the remainder of the partition, and re-integrating the migrated thread back to the mobile device. Our evaluation shows that CloneCloud can adapt application partitioning to different environments, and can help some applications achieve as much as a 20x execution speed-up and a 20-fold decrease of energy spent on the mobile device.
Conference Paper
We have designed and implemented the Google File System, a scalable distributed file system for large distributed data-intensive applications. It provides fault tolerance while running on inexpensive commodity hardware, and it delivers high aggregate performance to a large number of clients. While sharing many of the same goals as previous distributed file systems, our design has been driven by observations of our application workloads and technological environment, both current and anticipated, that reflect a marked departure from some earlier file system assumptions. This has led us to reexamine traditional choices and explore radically different design points. The file system has successfully met our storage needs. It is widely deployed within Google as the storage platform for the generation and processing of data used by our service as well as research and development efforts that require large data sets. The largest cluster to date provides hundreds of terabytes of storage across thousands of disks on over a thousand machines, and it is concurrently accessed by hundreds of clients. In this paper, we present file system interface extensions designed to support distributed applications, discuss many aspects of our design, and report measurements from both micro-benchmarks and real-world use.
Conference Paper
MapReduce is a programming model and an associated implementation for processing and generating large datasets that is amenable to a broad variety of real-world tasks. Users specify the computation in terms of a map and a reduce function, and the underlying runtime system automatically parallelizes the computation across large-scale clusters of machines, handles machine failures, and schedules inter-machine communication to make efficient use of the network and disks. Programmers find the system easy to use: more than ten thousand distinct MapReduce programs have been implemented internally at Google over the past four years, and an average of one hundred thousand MapReduce jobs are executed on Google's clusters every day, processing a total of more than twenty petabytes of data per day.
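The programming model can be mimicked locally in plain Python with the canonical word-count example (an illustrative sketch, not Google's implementation): the runtime applies the map function to every record, groups the intermediate pairs by key, and applies the reduce function once per key.

```python
from collections import defaultdict

def map_fn(_, line):
    # Emit one (word, 1) pair per word in the input line.
    for word in line.split():
        yield word.lower(), 1

def reduce_fn(word, counts):
    # Sum all intermediate counts for a given word.
    yield word, sum(counts)

def run_mapreduce(records, map_fn, reduce_fn):
    groups = defaultdict(list)          # the "shuffle" phase, done in memory here
    for key, value in records:
        for k, v in map_fn(key, value):
            groups[k].append(v)
    return dict(kv for k, vs in groups.items() for kv in reduce_fn(k, vs))

lines = [(1, "edge computing moves compute to the edge"),
         (2, "the edge complements the cloud")]
print(run_mapreduce(lines, map_fn, reduce_fn))
```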
Conference Paper
While many public cloud providers offer pay-as-you-go computing, their varying approaches to infrastructure, virtualization, and software services lead to a problem of plenty. To help customers pick a cloud that fits their needs, we develop CloudCmp, a systematic comparator of the performance and cost of cloud providers. CloudCmp measures the elastic computing, persistent storage, and networking services offered by a cloud along metrics that directly reflect their impact on the performance of customer applications. CloudCmp strives to ensure fairness, representativeness, and compliance of these measurements while limiting measurement cost. Applying CloudCmp to four cloud providers that together account for most of the cloud customers today, we find that their offered services vary widely in performance and costs, underscoring the need for thoughtful provider selection. From case studies on three representative cloud applications, we show that CloudCmp can guide customers in selecting the best-performing provider for their applications.
Conference Paper
PROFINET is the industrial Ethernet standard devised by PROFIBUS International (PI) for both modular machine and plant engineering and distributed IO. Plant-wide, multi-vendor engineering for modular machines reduces commissioning time as well as costs. With distributed IO, IO-controllers (e.g., PLCs) and their associated IO-devices can also be integrated into PROFINET solutions. Communication is a major part of PROFINET: a common real-time protocol covers real-time communication for standard factory automation applications as well as extensions that enable motion control applications. The advantages of modular, multi-vendor engineering and distributed IO can thus be exploited even in applications with time-critical data transfer requirements.