Chapter

IntellIoT: Intelligent IoT Environments

Abstract

Traditional IoT setups are cloud-centric and typically centered around a centralized IoT platform to which data is uploaded for further processing. Next-generation IoT applications incorporate technologies such as artificial intelligence, augmented reality, and distributed ledgers to realize semi-autonomous behaviour of vehicles, guidance for human users, and machine-to-machine interactions in a trustworthy manner. Such applications require more dynamic IoT environments that can operate locally, without the need to communicate with the Cloud. In this paper, we describe three use cases of next-generation IoT applications and highlight associated challenges for future research. We further present the IntellIoT framework, which comprises the components required to address the identified challenges.


References
Article
Full-text available
In this paper, the problem of training federated learning (FL) algorithms over a realistic wireless network is studied. In the considered model, wireless users execute an FL algorithm while training their local FL models using their own data and transmitting the trained local FL models to a base station (BS) that generates a global FL model and sends the model back to the users. Since all training parameters are transmitted over wireless links, the quality of training is affected by wireless factors such as packet errors and the availability of wireless resources. Meanwhile, due to the limited wireless bandwidth, the BS needs to select an appropriate subset of users to execute the FL algorithm so as to build a global FL model accurately. This joint learning, wireless resource allocation, and user selection problem is formulated as an optimization problem whose goal is to minimize an FL loss function that captures the performance of the FL algorithm. To seek the solution, a closed-form expression for the expected convergence rate of the FL algorithm is first derived to quantify the impact of wireless factors on FL. Then, based on the expected convergence rate of the FL algorithm, the optimal transmit power for each user is derived, under a given user selection and uplink resource block (RB) allocation scheme. Finally, the user selection and uplink RB allocation is optimized so as to minimize the FL loss function. Simulation results show that the proposed joint federated learning and communication framework can improve the identification accuracy by up to 1.4%, 3.5%, and 4.1%, respectively, compared to: 1) an optimal user selection algorithm with random resource allocation, 2) a standard FL algorithm with random user selection and resource allocation, and 3) a wireless optimization algorithm that minimizes the sum packet error rates of all users while being agnostic to the FL parameters.
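A minimal sketch of the underlying federated averaging loop with partial user participation may help make the setting concrete. The example below is illustrative only and is not the paper's joint power, user selection, and resource block optimization; the model, data, and selection rule are invented for the example.

```python
# Illustrative sketch (assumptions, not the paper's algorithm): FedAvg-style
# rounds in which only a subset of users uploads a local model per round,
# e.g. because uplink resources are limited.
import numpy as np

def local_update(global_w, X, y, lr=0.1, epochs=5):
    """One user's local training: gradient descent on a least-squares loss."""
    w = global_w.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

def federated_round(global_w, users, num_selected, rng):
    """Select a subset of users, train locally, and average their models."""
    selected = rng.choice(len(users), size=num_selected, replace=False)
    updates, sizes = [], []
    for i in selected:
        X, y = users[i]
        updates.append(local_update(global_w, X, y))
        sizes.append(len(y))
    weights = np.array(sizes) / sum(sizes)          # weight by local data size
    return sum(wt * u for wt, u in zip(weights, updates))

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
users = []
for _ in range(10):                                  # 10 users with private data
    X = rng.normal(size=(20, 2))
    users.append((X, X @ true_w + 0.01 * rng.normal(size=20)))

w = np.zeros(2)
for _ in range(50):
    w = federated_round(w, users, num_selected=4, rng=rng)
print(w)  # approaches true_w despite only 4 of 10 users updating per round
```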
Article
Full-text available
In the industrial Internet of Things domain, applications are moving from the Cloud into the Edge, closer to the devices producing and consuming data. This means that applications move from the scalable and homogeneous Cloud environment into a potentially constrained heterogeneous Edge network. Making Edge applications reliable enough to fulfill Industry 4.0 use cases remains an open research challenge. Maintaining operation of an Edge system requires advanced management techniques to mitigate the failure of devices. This article tackles this challenge with a twofold approach: (1) a policy-enabled failure detector that enables adaptable failure detection and (2) an allocation component for the efficient selection of failure mitigation actions. The parameters and performance of the failure detection approach are evaluated, and the performance of an energy-efficient allocation technique is measured. Finally, a vision for a complete system and an example use case are presented.
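To make the failure detection building block concrete, the sketch below shows a minimal heartbeat-based detector whose timeout acts as a tunable policy parameter. It is an assumption-laden illustration, not the article's policy-enabled detector or its allocation component.

```python
# Minimal heartbeat-based failure detector with a tunable timeout policy.
# Illustrative sketch only; the class and threshold rule are assumptions.
import time

class HeartbeatDetector:
    def __init__(self, timeout_s=3.0):
        self.timeout_s = timeout_s      # policy parameter: how long to wait
        self.last_seen = {}             # device id -> last heartbeat timestamp

    def heartbeat(self, device_id):
        self.last_seen[device_id] = time.monotonic()

    def suspected(self):
        """Return devices whose last heartbeat is older than the timeout."""
        now = time.monotonic()
        return [d for d, t in self.last_seen.items()
                if now - t > self.timeout_s]

detector = HeartbeatDetector(timeout_s=1.0)
detector.heartbeat("plc-1")
detector.heartbeat("camera-2")
time.sleep(1.5)
detector.heartbeat("plc-1")             # camera-2 stays silent
print(detector.suspected())             # ['camera-2']
```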
Article
Full-text available
Various tools support developers in the creation of IoT applications. In general, such tools focus on the business logic, which is important for application development; however, for IoT applications in particular, it is crucial to also consider the network, as these applications are intrinsically based on interconnected devices and services. IoT application developers do not have in-depth expertise in configuring networks and physical connections between devices. Hence, approaches are required that automatically deduce these configurations. We address this challenge in this work with an architecture and associated data models that enable networking-aware IoT application development. We evaluate our approach in the context of an application for oil leakage detection in wind turbines.
Article
Full-text available
Fueled by the availability of more data and computing power, recent breakthroughs in cloud-based machine learning (ML) have transformed every aspect of our lives from face recognition and medical diagnosis to natural language processing. However, classical ML exerts severe demands in terms of energy, memory and computing resources, limiting their adoption for resource constrained edge devices. The new breed of intelligent devices and high-stake applications (drones, augmented/virtual reality, autonomous systems, etc.), requires a novel paradigm change calling for distributed, low-latency and reliable ML at the wireless network edge (referred to as edge ML). In edge ML, training data is unevenly distributed over a large number of edge nodes, which have access to a tiny fraction of the data. Moreover training and inference are carried out collectively over wireless links, where edge devices communicate and exchange their learned models (not their private data). In a first of its kind, this article explores key building blocks of edge ML, different neural network architectural splits and their inherent tradeoffs, as well as theoretical and technical enablers stemming from a wide range of mathematical disciplines. Finally, several case studies pertaining to various high-stake applications are presented demonstrating the effectiveness of edge ML in unlocking the full potential of 5G and beyond.
Article
Full-text available
With the Internet of Things (IoT) becoming part of our daily life and our environment, we expect rapid growth in the number of connected devices. IoT is expected to connect billions of devices and humans to bring promising advantages for us. With this growth, fog computing, along with its related edge computing paradigms, such as multi-access edge computing (MEC) and cloudlet, are seen as promising solutions for handling the large volume of security-critical and time-sensitive data that is being produced by the IoT. In this paper, we first provide a tutorial on fog computing and its related computing paradigms, including their similarities and differences. Next, we provide a taxonomy of research topics in fog computing, and through a comprehensive survey, we summarize and categorize the efforts on fog computing and its related computing paradigms. Finally, we provide challenges and future directions for research in fog computing.
Article
Full-text available
Evaluating the effectiveness of moving target defense (MTD) has become one of the fundamental problems in current studies. In this paper, an evaluation model of MTD effectiveness based on the system attack surface (SAS) is proposed, extending attack-surface evaluation to enterprise-class topologies and multi-layered moving target (MT) techniques. The model addresses the incorrect performance assessments that arise when the process of attacking and defending is characterized inaccurately. Existing evaluation models often fail to describe MTD dynamically as a process; to overcome this static view, an offensive and defensive process based on players' moves is presented. All attack and defense actions are converted into this process, and their interactions are evaluated with a system-view extended attack surface model. Previously proposed attack surface models do not consider links between nodes or vulnerabilities affected by the topology. After comprehensively analyzing the impact of interactions in the system, a SAS model is proposed to demonstrate how system resources are affected by the actions of attackers and defenders, thus ensuring the correctness of the SAS parameters used to measure MT techniques. Moreover, by generating a sequence of these shifting parameters, a nonhomogeneous hierarchical hidden Markov model (NHHMM) combined with a partial Viterbi algorithm (PVA) is used to find the likely sequence of attacking states. This sequence of attacking states illustrates how adversaries are handled by MT techniques and how much additional cost is incurred by system resource reconfiguration. Finally, a simulation of the proposed approach is given in a case study to demonstrate the feasibility and validity of the proposed effectiveness evaluation model from a systematic and dynamic view.
Article
Full-text available
Recent advances in automatic machine learning (aML) allow solving problems without any human intervention. However, a human-in-the-loop can sometimes be beneficial in solving computationally hard problems. In this paper we provide new experimental insights on how we can improve computational intelligence by complementing it with human intelligence in an interactive machine learning (iML) approach. For this purpose, we used the Ant Colony Optimization (ACO) framework, because it fosters multi-agent approaches with human agents in the loop and is one of the best-performing algorithms in many applied intelligence problems. We propose to unify human intelligence and interaction skills with the computational power of an artificial system. The ACO framework is applied in a case study solving the Traveling Salesman Problem, chosen because of its many practical implications, e.g. in the medical domain. For the evaluation we used gamification: we implemented a snake-like game called Traveling Snakesman with the MAX-MIN Ant System (MMAS) running in the background. We extended the MMAS algorithm so that the human can directly interact with and influence the ants by "traveling" with the snake across the graph. Each time the human travels over an ant, the current pheromone value of the edge is multiplied by 5. This manipulation affects the ants' behavior by increasing the probability that this edge is taken by an ant. The results show that a human performing a single tour through the graph has a significant impact on the shortest path found by the MMAS. Consequently, our experiment demonstrates that, in our case, human intelligence can positively influence machine intelligence. To the best of our knowledge this is the first study of this kind.
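The interaction mechanism is simple enough to sketch in code. The example below is a simplified illustration of the pheromone manipulation described above (the graph, ant step, and function names are invented), not the Traveling Snakesman implementation.

```python
# Sketch of the interaction described above: when the human's path covers an
# edge, that edge's pheromone value is multiplied by 5, biasing the ants
# toward it. Data structures and the ant step are simplified assumptions.
import random

n = 5
pheromone = [[1.0] * n for _ in range(n)]   # symmetric pheromone matrix

def human_traverses(i, j):
    """Human interaction: boost the pheromone on edge (i, j)."""
    pheromone[i][j] *= 5
    pheromone[j][i] *= 5

def ant_next_city(current, unvisited):
    """Choose the next city proportionally to pheromone (distances omitted)."""
    weights = [pheromone[current][j] for j in unvisited]
    return random.choices(unvisited, weights=weights, k=1)[0]

human_traverses(0, 3)                       # the snake crosses edge 0-3
print(ant_next_city(0, [1, 2, 3, 4]))       # edge 0-3 is now 5x more likely
```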
Article
Full-text available
With the advent of smart homes, smart cities, and smart everything, the Internet of Things (IoT) has emerged as an area of incredible impact, potential, and growth, with Cisco Inc. predicting 50 billion connected devices by 2020. However, most of these IoT devices are easy to hack and compromise. Typically, IoT devices are limited in compute, storage, and network capacity, and therefore they are more vulnerable to attacks than other endpoint devices such as smartphones, tablets, or computers. In this paper, we present and survey major security issues for IoT. We review and categorize popular security issues with regard to the IoT layered architecture, in addition to protocols used for networking, communication, and management. We outline security requirements for IoT along with the existing attacks, threats, and state-of-the-art solutions. Furthermore, we tabulate and map IoT security problems against existing solutions found in the literature. More importantly, we discuss how blockchain, the underlying technology of Bitcoin, can be a key enabler to solve many IoT security problems. The paper also identifies open research problems and challenges for IoT security.
Article
Full-text available
The increasing number of mobile users has given impetus to the demand for high data rate proximity services. The fifth generation (5G) wireless systems promise to improve the existing technology according to future demands and provide a road-map for reliable and resource-efficient solutions. Device-to-device (D2D) communication has been envisioned as an allied technology of 5G wireless systems for providing services that include live data and video sharing. The D2D communication technique opens new horizons of device-centric communications, i.e., exploiting direct D2D links instead of relying solely on cellular links. Offloading traffic from traditional network-centric entities to the D2D network enables low computational complexity at the base station besides increasing the network capacity. However, there are several challenges associated with D2D communication. In this article, we present a survey of the existing methodologies related to aspects such as interference management, network discovery, proximity services, and network security in D2D networks. We conclude by introducing new dimensions with regard to D2D communication and delineate aspects that require further research.
Article
Full-text available
The Internet of Things (IoT) envisions pervasive, connected, and smart nodes interacting autonomously while offering all sorts of services. The wide distribution, openness, and relatively high processing power of IoT objects have made them an ideal target for cyber attacks. Moreover, as many IoT nodes collect and process private information, they are becoming a goldmine of data for malicious actors. Therefore, security, and specifically the ability to detect compromised nodes together with collecting and preserving evidence of an attack or malicious activities, emerges as a priority for the successful deployment of IoT networks. In this paper, we first introduce the major existing security and forensics challenges within the IoT domain and then briefly discuss the papers published in this special issue that target the identified challenges.
Article
Full-text available
For safe and efficient planning and control in autonomous driving, we need a driving policy that can achieve desirable driving quality over a long-term horizon with guaranteed safety and feasibility. Optimization-based approaches, such as Model Predictive Control (MPC), can provide such optimal policies, but their computational complexity is generally unacceptable for real-time implementation. To address this problem, we propose a fast integrated planning and control framework that combines learning- and optimization-based approaches in a two-layer hierarchical structure. The first layer, defined as the "policy layer", is established by a neural network which learns the long-term optimal driving policy generated by MPC. The second layer, called the "execution layer", is a short-term optimization-based controller that tracks the reference trajectories given by the "policy layer" with guaranteed short-term safety and feasibility. Moreover, with efficient and highly representative features, a small neural network is sufficient in the "policy layer" to handle many complicated driving scenarios. This enables online imitation learning with Dataset Aggregation (DAgger), so that the performance of the "policy layer" can be improved rapidly and continuously online. Several example driving scenarios are demonstrated to verify the effectiveness and efficiency of the proposed framework.
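The two-layer structure can be illustrated with toy stand-ins for both layers; the sketch below is an assumption-based illustration of the idea, not the paper's trained network or its MPC-based execution layer.

```python
# Illustrative sketch of a two-layer policy/execution split: a learned policy
# proposes a short reference, and a short-term controller tracks it while
# enforcing a simple safety constraint. Both layers are toy stand-ins.
import numpy as np

def policy_layer(state):
    """Stand-in for the trained neural network: propose 5 reference accelerations."""
    return np.full(5, 0.5 * (20.0 - state["speed"]))   # push toward 20 m/s

def execution_layer(state, reference_accels, a_max=2.0, gap_min=10.0):
    """Short-term controller: track the reference but respect safety limits."""
    a = float(np.clip(reference_accels[0], -a_max, a_max))
    if state["gap"] < gap_min:        # crude check against the leading vehicle
        a = -a_max
    return a

state = {"speed": 15.0, "gap": 8.0}
ref = policy_layer(state)
print(execution_layer(state, ref))    # brakes because the gap is too small
```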
Conference Paper
Full-text available
The Internet of Things typically involves a significant number of smart sensors sensing information from the environment and sharing it with a cloud service for processing. Various architectural abstractions, such as Fog and Edge computing, have been proposed to localize some of the processing near the sensors and away from the central cloud servers. In this paper, we propose the Edge-Fog Cloud, which distributes task processing on the participating cloud resources in the network. We develop the Least Processing Cost First (LPCF) method for assigning processing tasks to nodes, which provides optimal processing time and near-optimal networking costs. We evaluate LPCF in a variety of scenarios and demonstrate its effectiveness in finding the processing task assignments.
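The following sketch illustrates the flavor of a least-processing-cost-first assignment by greedily mapping each task to the node that finishes it soonest; the real LPCF method additionally accounts for networking costs, so this is a simplified assumption-based example.

```python
# Simplified greedy assignment in the spirit of least-processing-cost-first:
# each task goes to the node where it would currently finish soonest.
# Illustrative only; the actual LPCF method also weighs networking cost.
def assign_tasks(task_sizes, node_speeds):
    """Greedily map tasks to nodes by lowest resulting completion time."""
    finish_time = [0.0] * len(node_speeds)
    assignment = []
    for size in task_sizes:
        costs = [finish_time[n] + size / node_speeds[n]
                 for n in range(len(node_speeds))]
        best = min(range(len(node_speeds)), key=lambda n: costs[n])
        finish_time[best] = costs[best]
        assignment.append(best)
    return assignment, max(finish_time)

tasks = [4, 2, 7, 1, 5]          # processing demand per task
nodes = [1.0, 2.5, 0.5]          # processing speed of each edge/fog/cloud node
print(assign_tasks(tasks, nodes))
```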
Article
Full-text available
Cloud computing has brought a paradigm shift to the technology industry and is becoming increasingly popular day by day. Small and medium enterprises (SMEs) are now adopting cloud computing at a much higher rate than large enterprises, which raises a debate about whether this technology will penetrate the entire IT industry. SMEs adopt cloud computing for the low-cost implementation of their complete IT infrastructure and software systems, whereas large enterprises rely on their own infrastructure for reasons of data security, privacy, and flexibility of access. In this paper, we provide a survey of possible limitations of cloud computing that are delaying its penetration. We also identify ongoing and potential solutions that will help enterprises adopt cloud computing for their IT infrastructure and software systems.
Conference Paper
Full-text available
Despite the broad utilization of cloud computing, some applications and services still cannot benefit from this popular computing paradigm due to its inherent problems, such as unacceptable latency, lack of mobility support, and lack of location-awareness. As a result, fog computing has emerged as a promising infrastructure to provide elastic resources at the edge of the network. In this paper, we discuss current definitions of fog computing and similar concepts, and propose a more comprehensive definition. We also analyze the goals and challenges of fog computing platforms, and present a platform design with several exemplar applications. We finally implement and evaluate a prototype fog computing platform.
Article
Full-text available
We describe the architecture and prototype implementation of an assistive system based on Google Glass devices for users in cognitive decline. It combines the first-person image capture and sensing capabilities of Glass with remote processing to perform real-time scene interpretation. The system architecture is multi-tiered. It offers tight end-to-end latency bounds on compute-intensive operations, while addressing concerns such as limited battery capacity and limited processing capability of wearable devices. The system gracefully degrades services in the face of network failures and unavailability of distant architectural tiers.
Article
The pervasive need to safely share and store information between devices calls for the replacement of centralized trust architectures with decentralized ones. DLTs are seen as the most promising enabler of decentralized trust, but they still lack technological maturity, and their successful adoption depends on the understanding of the fundamental design trade-offs and their reflection in the actual technical design. This work focuses on the challenges and potential solutions for an effective integration of DLTs in the context of the Internet of Things (IoT). We first introduce the landscape of IoT applications and discuss the limitations and opportunities offered by DLTs. Then we review the technical challenges encountered in the integration of resource-constrained devices with distributed trust networks. We describe the common traits of lightweight synchronization protocols, and propose a novel classification rooted in the IoT perspective. We identify the need for receiving ledger information at the endpoint devices, implying a two-way data exchange that contrasts with the conventional uplink-oriented communication technologies intended for IoT systems.
Article
Motivated by the increasing computational capacity of wireless user equipments (UEs), e.g., smart phones, tablets, or vehicles, as well as the increasing concerns about sharing private data, a new machine learning model has emerged, namely federated learning (FL), that allows a decoupling of data acquisition and computation at the central unit. Unlike centralized learning taking place in a data center, FL usually operates in a wireless edge network where the communication medium is resource-constrained and unreliable. Due to limited bandwidth, only a portion of UEs can be scheduled for updates at each iteration. Due to the shared nature of the wireless medium, transmissions are subject to interference and are not guaranteed. The performance of an FL system in such a setting is not well understood. In this paper, an analytical model is developed to characterize the performance of FL in wireless networks. Particularly, tractable expressions are derived for the convergence rate of FL in a wireless setting, accounting for effects from both scheduling schemes and inter-cell interference. Using the developed analysis, the effectiveness of three different scheduling policies, i.e., random scheduling (RS), round robin (RR), and proportional fair (PF), is compared in terms of FL convergence rate. It is shown that running FL with PF outperforms RS and RR if the network is operating under a high signal-to-interference-plus-noise ratio (SINR) threshold, while RR is preferable when the SINR threshold is low. Moreover, the FL convergence rate decreases rapidly as the SINR threshold increases, thus confirming the importance of compression and quantization of the update parameters. The analysis also reveals a trade-off between the number of scheduled UEs and subchannel bandwidth under a fixed amount of available spectrum.
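A small sketch may clarify how the three scheduling policies differ when selecting which UEs upload updates in a round. The rate model and metric below are simplified assumptions, not the paper's analytical model.

```python
# Illustrative sketch of the three scheduling policies compared above, applied
# to choosing which UEs send model updates in a round. The channel model and
# the proportional-fair metric are simplified assumptions.
import numpy as np

rng = np.random.default_rng(1)

def schedule(policy, round_idx, inst_rate, avg_rate, k):
    n = len(inst_rate)
    if policy == "RS":                       # random scheduling
        return rng.choice(n, size=k, replace=False)
    if policy == "RR":                       # round robin
        start = (round_idx * k) % n
        return np.arange(start, start + k) % n
    if policy == "PF":                       # proportional fair:
        metric = inst_rate / avg_rate        # favor users whose current channel
        return np.argsort(metric)[-k:]       # is good relative to their history
    raise ValueError(policy)

inst_rate = rng.uniform(1, 10, size=8)       # current channel quality per UE
avg_rate = rng.uniform(3, 6, size=8)         # long-term average per UE
for policy in ("RS", "RR", "PF"):
    print(policy, schedule(policy, round_idx=2, inst_rate=inst_rate,
                           avg_rate=avg_rate, k=3))
```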
Article
Smart agriculture systems based on the Internet of Things are among the most promising approaches to increasing food production and reducing the consumption of resources such as fresh water. In this study, we present a smart agriculture IoT system based on deep reinforcement learning which includes four layers, namely an agricultural data collection layer, an edge computing layer, an agricultural data transmission layer, and a cloud computing layer. The presented system integrates advanced information techniques, especially artificial intelligence and cloud computing, with agricultural production to increase food production. In particular, deep reinforcement learning, one of the most advanced artificial intelligence models, is employed in the cloud layer to make immediate smart decisions, such as determining the amount of irrigation water needed to improve the crop growth environment. We present several representative deep reinforcement learning models along with their broad applications. Finally, we discuss the open challenges and the potential applications of deep reinforcement learning in smart agriculture IoT systems.
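As a toy illustration of the kind of decision-making loop involved (not the system described in the study), the sketch below uses tabular Q-learning to pick an irrigation amount from a discretized soil moisture state; the environment, reward, and discretization are invented for the example.

```python
# Toy tabular Q-learning for an irrigation decision: choose how much water to
# apply given a discretized soil moisture level. Purely illustrative; the
# dynamics and reward are invented assumptions, not the paper's system.
import random

actions = [0, 10, 20]                  # liters of water to apply
Q = {(m, a): 0.0 for m in range(5) for a in actions}

def step(moisture, water):
    """Toy dynamics: moisture rises with water, drops by evaporation."""
    new_m = max(0, min(4, moisture + water // 10 - 1))
    reward = -abs(new_m - 2) - 0.05 * water   # target level 2, water is costly
    return new_m, reward

moisture, alpha, gamma, eps = 2, 0.1, 0.9, 0.2
for _ in range(5000):
    a = random.choice(actions) if random.random() < eps else \
        max(actions, key=lambda x: Q[(moisture, x)])
    new_m, r = step(moisture, a)
    best_next = max(Q[(new_m, x)] for x in actions)
    Q[(moisture, a)] += alpha * (r + gamma * best_next - Q[(moisture, a)])
    moisture = new_m

# Learned irrigation amount per moisture level
print({m: max(actions, key=lambda x: Q[(m, x)]) for m in range(5)})
```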
Conference Paper
Safety and efficiency are two key elements for planning and control in autonomous driving. Theoretically, model-based optimization methods, such as Model Predictive Control (MPC), can provide such optimal driving policies. Their computational complexity, however, grows exponentially with horizon length and number of surrounding vehicles. This makes them impractical for real-time implementation, particularly when nonlinear models are considered. To enable a fast and approximately optimal driving policy, we propose a safe imitation framework, which contains two hierarchical layers. The first layer, defined as the policy layer, is represented by a neural network that imitates a long-term expert driving policy via imitation learning. The second layer, called the execution layer, is a short-term model-based optimal controller that tracks and further fine-tunes the reference trajectories proposed by the policy layer with guaranteed short-term collision avoidance. Moreover, to reduce the distribution mismatch between the training set and the real world, Dataset Aggregation is utilized so that the performance of the policy layer can be improved from iteration to iteration. Several highway driving scenarios are demonstrated in simulations, and the results show that the proposed framework can achieve similar performance as sophisticated long-term optimization approaches but with significantly improved computational efficiency.
Article
Ensuring ultrareliable and low-latency communication (URLLC) for 5G wireless networks and beyond is of capital importance and is currently receiving tremendous attention in academia and industry. At its core, URLLC mandates a departure from expected utility-based network design approaches, in which relying on average quantities (e.g., average throughput, average delay, and average response time) is no longer an option but a necessity. Instead, a principled and scalable framework which takes into account delay, reliability, packet size, network architecture and topology (across access, edge, and core), and decision-making under uncertainty is sorely lacking. The overarching goal of this paper is a first step to filling this void. Towards this vision, after providing definitions of latency and reliability, we closely examine various enablers of URLLC and their inherent tradeoffs. Subsequently, we focus our attention on a wide variety of techniques and methodologies pertaining to the requirements of URLLC, as well as their applications through selected use cases. These results provide crisp insights for the design of low-latency and high-reliability wireless networks.
Article
We introduce a new and increasingly relevant setting for distributed optimization in machine learning, where the data defining the optimization are unevenly distributed over an extremely large number of nodes. The goal is to train a high-quality centralized model. We refer to this setting as Federated Optimization. In this setting, communication efficiency is of the utmost importance and minimizing the number of rounds of communication is the principal goal. A motivating example arises when we keep the training data locally on users' mobile devices instead of logging it to a data center for training. In federated optimization, the devices are used as compute nodes performing computation on their local data in order to update a global model. We suppose that we have an extremely large number of devices in the network --- as many as the number of users of a given service, each of which has only a tiny fraction of the total data available. In particular, we expect the number of data points available locally to be much smaller than the number of devices. Additionally, since different users generate data with different patterns, it is reasonable to assume that no device has a representative sample of the overall distribution. We show that existing algorithms are not suitable for this setting, and propose a new algorithm which shows encouraging experimental results for sparse convex problems. This work also sets a path for future research needed in the context of federated optimization.
Conference Paper
Data Stream Processing (DSP) applications are widely used to extract information in a timely manner from distributed data sources, such as sensing devices, monitoring stations, and social networks. To successfully handle this ever-increasing amount of data, recent trends investigate the possibility of exploiting decentralized computational resources (e.g., Fog computing) to define the application placement. Several placement policies have been proposed in the literature, but they are based on different assumptions and optimization goals and, as such, are not completely comparable to each other. In this paper we study the placement problem for distributed DSP applications. Our contributions are twofold. We provide a general formulation of the optimal DSP placement (for short, ODP) as an Integer Linear Programming problem which explicitly takes into account the heterogeneity of computing and networking resources and which encompasses - as special cases - the different solutions proposed in the literature. We present an ODP-based scheduler for the Apache Storm DSP framework. This allows us to compare some well-known centralized and decentralized placement solutions. We also extensively analyze the ODP scalability with respect to various parameter settings.
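As a hedged skeleton of what such a formulation looks like (not the exact ODP model, which additionally captures the network links between nodes hosting communicating operators through further, linearized variables), an operator placement ILP can be written as:

```latex
% Generic skeleton of an operator-placement ILP (illustrative assumptions only).
% x_{i,u} = 1 iff DSP operator i is placed on computing node u.
\begin{align*}
\min_{x} \quad & \sum_{i \in \mathcal{O}} \sum_{u \in \mathcal{N}} c_{i,u}\, x_{i,u} \\
\text{s.t.} \quad
& \sum_{u \in \mathcal{N}} x_{i,u} = 1 && \forall i \in \mathcal{O}
  && \text{(each operator placed exactly once)} \\
& \sum_{i \in \mathcal{O}} r_i\, x_{i,u} \le R_u && \forall u \in \mathcal{N}
  && \text{(node resource capacity)} \\
& x_{i,u} \in \{0, 1\} && \forall i \in \mathcal{O},\ u \in \mathcal{N}
\end{align*}
```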
Arne Bröring, Jan Seeger, Manos Papoutsakis, Konstantinos Fysarakis, and Ahmad Caracalli. Networking-aware IoT application development. Sensors, 20(3):897, 2020.
Andrei Ciortea, Simon Mayer, Fabien Gandon, Olivier Boissier, Alessandro Ricci, and Antoine Zimmermann. A Decade in Hindsight: The Missing Bridge Between Multi-Agent Systems and the World Wide Web. In Proceedings of the 18th International Conference on Autonomous Agents and MultiAgent Systems, pages 1659-1663. International Foundation for Autonomous Agents and Multiagent Systems, 2019.
Florian Kaltenberger, Guy de Souza, Raymond Knopp, and Hongzhi Wang. The OpenAirInterface 5G New Radio Implementation: Current Status and Roadmap. In WSA 2019; 23rd International ITG Workshop on Smart Antennas, pages 1-5. VDE, 2019.
Mauro Conti, Ali Dehghantanha, Katrin Franke, and Steve Watson. Internet of Things Security and Forensics: Challenges and Opportunities. Future Generation Computer Systems, 78:544-546, 2018.