Thesis

Solving interoperability and performance challenges over heterogeneous IoT networks: DNS-based solutions


Abstract

The Internet of Things (IoT) has evolved from the theoretical possibility of connecting anything and everything into an ever-growing market of goods and services. Its underlying technologies have diversified, and IoT now encompasses various communication technologies ranging from short-range technologies such as Bluetooth, through medium-range technologies such as Zigbee, to long-range technologies such as Long Range Wide Area Network (LoRaWAN).

IoT systems are usually built around closed, siloed infrastructures. Developing interoperability between these closed silos is crucial for IoT use cases such as Smart Cities. Working on this subject at the application level is a first step that evolved directly from current practice in data collection and analysis in the context of Big Data. However, building bridges at the network level would enable easier interconnection between infrastructures and facilitate seamless transitions between IoT technologies to improve coverage at low cost.

The Domain Name System (DNS), originally developed to translate human-friendly host names into their corresponding IP addresses, is a known interoperability facilitator on the Internet. It is one of the oldest systems deployed on the Internet and was developed to support the growth of the Internet infrastructure at the end of the 1980s. Despite its age, it remains a core Internet service, and many changes to its initial specifications are still in progress, as shown by the increasing number of proposals to amend its standards. DNS relies on simple principles, but its evolution since its first developments has made it possible to build complex systems using its many configuration possibilities.

This thesis investigates possible improvements to IoT services and infrastructures. Our key problem can be formulated as follows: can the DNS and its infrastructure serve as a good baseline to support the evolution of the IoT, as it accompanied the evolution of the Internet? We address this question with three approaches. We begin by experimenting with a federated roaming model for IoT networks that exploits the strengths of the DNS infrastructure and its security extensions to improve interoperability and end-to-end security and to optimize back-end communications. Its goal is to provide seamless transitions between networks based on information stored in the DNS infrastructure. We also explore the issues behind DNS and application response times, and how to limit their impact on constrained exchanges between end devices and radio gateways, by studying DNS prefetching scenarios in a city mobility context. Our second subject of interest is how DNS can be used to improve availability, interoperability and scalability in compression protocols for the IoT. Finally, we experiment with compression paradigms and traffic minimization by implementing machine learning algorithms on sensors and monitoring important system parameters, in particular transmission performance and energy efficiency.
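The DNS prefetching idea explored in the thesis can be illustrated with a short sketch: resolve the names a device is predicted to need (for example, the next gateways along a route) ahead of time and keep the answers in a local cache, so that lookup latency is not paid on the constrained link. This is a minimal Python sketch using only the standard library; the host names and the cache policy (no TTL handling) are illustrative assumptions, not part of the thesis.

```python
# A minimal sketch of DNS prefetching, assuming a device knows (or predicts)
# the names it will need soon. Names below are hypothetical placeholders;
# a real deployment would also respect DNS TTLs.
import socket
import time
from concurrent.futures import ThreadPoolExecutor

_cache = {}  # name -> (resolved address info, timestamp)

def prefetch(name, port=443):
    """Resolve a name ahead of time and keep the answer in a local cache."""
    try:
        info = socket.getaddrinfo(name, port, proto=socket.IPPROTO_TCP)
        _cache[name] = (info, time.time())
    except socket.gaierror:
        pass  # prefetch failures are non-fatal; resolution is retried on demand

def resolve(name, port=443):
    """Return a cached answer if present, otherwise resolve synchronously."""
    if name in _cache:
        return _cache[name][0]
    return socket.getaddrinfo(name, port, proto=socket.IPPROTO_TCP)

if __name__ == "__main__":
    predicted = ["example.com", "example.org"]   # placeholders for predicted next hops
    with ThreadPoolExecutor(max_workers=4) as pool:
        pool.map(prefetch, predicted)            # warm the cache in the background
    print(resolve("example.com")[0][4])          # served from the warm cache if prefetch succeeded
```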

... In [33], collections of DoH resolvers are connected to Firefox over various test sessions. The collected traffic is next examined for DoH traffic using temporal characteristics and packet sizes. ...
... [17] used RNN, RFC, DTC, LSTM, GRU, GBC, KNC, and XGBoost, which require a huge amount of labeled data; [33] used a Multi-Layer Perceptron, whose structure is complex and whose real-time performance is poor. ...
Article
Full-text available
The Domain Name System (DNS) protocol essentially translates domain names to IP addresses, enabling browsers to load and utilize Internet resources. Despite its major role, DNS is vulnerable to various security loopholes that attackers have continually abused. Therefore, delivering secure DNS traffic has become challenging since attackers use advanced and fast malicious information-stealing approaches. To overcome DNS vulnerabilities, the DNS over HTTPS (DoH) protocol was introduced to improve the security of the DNS protocol by encrypting the DNS traffic and communicating it over a covert network channel. This paper proposes a lightweight, double-stage scheme to identify malicious DoH traffic using a hybrid learning approach. The system comprises two layers. At the first layer, the traffic is examined using random fine trees (RF) and identified as DoH traffic or non-DoH traffic. At the second layer, the DoH traffic is further investigated using Adaboost trees (ADT) and identified as benign DoH or malicious DoH. Specifically, the proposed system is lightweight since it works with the least number of features (using only six out of thirty-three features) selected using principal component analysis (PCA) and minimizes the number of samples produced using a random under-sampling (RUS) approach. The experimental evaluation reported a high-performance system with a predictive accuracy of 99.4% and 100% and a predictive overhead of 0.83 µs and 2.27 µs for layer one and layer two, respectively. Hence, the reported results are superior and surpass existing models, given that our proposed model uses only 18% of the feature set and 17% of the sample set, distributed in balanced classes.
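The two-stage pipeline described above can be sketched with off-the-shelf components. The following is a minimal illustration, not the paper's implementation: synthetic features stand in for the 33 DoH flow statistics, scikit-learn's PCA keeps six components, a random forest plays the role of the first-layer RF classifier and AdaBoost the second-layer ADT, and class balancing is done with a simple random undersampler.

```python
# A minimal two-stage sketch under stated assumptions (synthetic data, labels
# are random, so accuracy here is meaningless; only the structure matters).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(3000, 33))            # 33 flow features (synthetic)
y_doh = rng.integers(0, 2, 3000)           # stage 1 label: DoH vs non-DoH
y_mal = rng.integers(0, 2, 3000)           # stage 2 label: benign vs malicious DoH

def undersample(X, y):
    """Balance classes by randomly dropping samples from the majority class."""
    idx0, idx1 = np.where(y == 0)[0], np.where(y == 1)[0]
    n = min(len(idx0), len(idx1))
    keep = np.concatenate([rng.choice(idx0, n, replace=False),
                           rng.choice(idx1, n, replace=False)])
    return X[keep], y[keep]

X6 = PCA(n_components=6).fit_transform(X)  # keep only 6 of 33 features

Xa, ya = undersample(X6, y_doh)
stage1 = RandomForestClassifier(n_estimators=100).fit(Xa, ya)   # DoH vs non-DoH

doh_mask = y_doh == 1
Xb, yb = undersample(X6[doh_mask], y_mal[doh_mask])
stage2 = AdaBoostClassifier().fit(Xb, yb)                       # benign vs malicious DoH

flow = X6[:1]
if stage1.predict(flow)[0] == 1:
    print("malicious DoH" if stage2.predict(flow)[0] == 1 else "benign DoH")
else:
    print("non-DoH traffic")
```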
... Machine learning algorithms look for trends in past sensor data stored in a big data warehouse and build prediction models. Control applications that give commands to the drivers of IoT devices use these models [16][17][18]. ...
Article
Full-text available
Nowadays, Internet of Things (IoT) applications, especially in smart cities, are developing fast. Clustering is a promising solution for handling IoT issues such as energy efficiency, scalability, robustness, mobility, and load balancing. The clustering method, which can be applied in IoT, groups sensor nodes into clusters with one node operating as the cluster head. This paper examines the usage of clustering in IoT, with smart cities as a case study. Furthermore, it discusses clustering algorithms for IoT, open issues, and future challenges of clustering in the context of the smart city, drawing on existing research papers published between 2017 and 2021 and selected with the systematic literature review technique. We also provide a technical taxonomy for clustering categorization in IoT, covering algorithm, architecture, and application. According to the statistical analysis of the 51 chosen research articles in the domain of clustering in IoT, the number of clusters is the most frequently considered criterion (24%), followed by energy (23%), execution time (18%), accuracy (14%), delay (9%), lifetime (6%), and throughput (6%).
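As a concrete illustration of the clustering idea surveyed above, the sketch below groups sensor node positions with k-means and elects as cluster head the node closest to each centroid, a common heuristic; the node coordinates and the number of clusters are made up for the example and are not taken from the paper.

```python
# A minimal sketch, not from the paper: cluster sensor nodes and pick heads.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
nodes = rng.uniform(0, 100, size=(60, 2))          # node positions in a 100x100 m area

k = 5
km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(nodes)

for c in range(k):
    members = np.where(km.labels_ == c)[0]
    dists = np.linalg.norm(nodes[members] - km.cluster_centers_[c], axis=1)
    head = members[np.argmin(dists)]               # node nearest the centroid
    print(f"cluster {c}: {len(members)} nodes, head = node {head}")
```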
Technical Report
Full-text available
This document defines the Static Context Header Compression and fragmentation (SCHC) framework, which provides both a header compression mechanism and an optional fragmentation mechanism. SCHC has been designed with Low-Power Wide Area Networks (LPWANs) in mind. SCHC compression is based on a common static context stored both in the LPWAN device and in the network infrastructure side. This document defines a generic header compression mechanism and its application to compress IPv6/UDP headers. This document also specifies an optional fragmentation and reassembly mechanism. It can be used to support the IPv6 MTU requirement over the LPWAN technologies. Fragmentation is needed for IPv6 datagrams that, after SCHC compression or when such compression was not possible, still exceed the Layer 2 maximum payload size. The SCHC header compression and fragmentation mechanisms are independent of the specific LPWAN technology over which they are used. This document defines generic functionalities and offers flexibility with regard to parameter settings and mechanism choices. This document standardizes the exchange over the LPWAN between two SCHC entities. Settings and choices specific to a technology or a product are expected to be grouped into profiles, which are specified in other documents. Data models for the context and profiles are out of scope.
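The core SCHC principle, eliding header fields whose values are already known from a static context shared by the device and the network side, can be illustrated with a toy rule table. The sketch below is not an RFC 8724 implementation: the rule layout, field names, and actions are simplified stand-ins for the specification's field descriptors, matching operators, and compression/decompression actions.

```python
# A minimal, illustrative sketch of static-context header compression in the
# spirit of SCHC: both ends share a rule table; fields that match the stored
# target value ("equal" + "not-sent") are elided and restored from the context.
RULES = {
    1: [  # rule ID 1: (field name, target value, matching operator, action)
        ("ipv6.version", 6,    "equal",  "not-sent"),
        ("udp.dst_port", 5683, "equal",  "not-sent"),
        ("udp.src_port", None, "ignore", "value-sent"),
    ],
}

def compress(headers, rule_id):
    residue = []
    for name, target, match, action in RULES[rule_id]:
        if match == "equal" and headers[name] != target:
            raise ValueError(f"rule {rule_id} does not match field {name}")
        if action == "value-sent":
            residue.append(headers[name])          # only non-elided fields travel
    return rule_id, residue

def decompress(rule_id, residue):
    headers, it = {}, iter(residue)
    for name, target, _match, action in RULES[rule_id]:
        headers[name] = target if action == "not-sent" else next(it)
    return headers

packet = {"ipv6.version": 6, "udp.dst_port": 5683, "udp.src_port": 40001}
rid, residue = compress(packet, 1)
print(rid, residue)                     # 1 [40001]  -- tiny on-air representation
print(decompress(rid, residue))         # original headers rebuilt from the context
```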
Chapter
Full-text available
Vehicle-to-vehicle (V2V) communication is an advanced application and a thrust area of research. In the current research, the authors highlight the technologies used in V2V communication systems. The advantage of such technology is that it helps to detect live location and tolling, and it plays an important role in heavy traffic. The current work reviews Li-Fi, RFID, VANET, and LoRaWAN technologies. Li-Fi is a VLC communication system that uses visible light for high-rate data transmission and reception. RFID technology helps emergency vehicles reach their destination quickly by avoiding traffic. LoRaWAN is a large-scale network technology with long range and low power, and VANET makes it possible to obtain accurate traffic information on each route, which saves time. The different technologies are compared in order to identify the optimal technology for each application.
Article
Full-text available
A Large-Scale Heterogeneous Network (LS-HetNet) integrates different networks into one uniform network system to provide seamless one-world network coverage. In an LS-HetNet, various devices use different technologies to access heterogeneous networks and generate a large amount of data. To deal with a large number of access requests, these data are usually stored in the HetNet Domain Management Server (HDMS) of the current domain, and HDMS uses a centralized Authentication/Authorization/Auditing (AAA) scheme to protect the data. However, this centralized method makes it easy for the data to be modified or disclosed. To address this issue, we propose a blockchain-empowered AAA scheme for accessing the data of an LS-HetNet. Firstly, the blockchain account address is used for identity authentication and the access-control permissions on data are redesigned and stored on the blockchain; the AAA processes are then redefined accordingly. Finally, an experimental model on an Ethereum private chain is built, and the results show that the scheme is not only secure but also decentralized, tamper-resistant, and trustworthy.
Article
Full-text available
In a context of growing population and scarcer soil and water resources, irrigated agriculture makes it possible to increase the yield and production of several crops in order to meet the high demand for food and fibers. To be efficient, an irrigation system should correctly evaluate both the amount of water and the timing at which irrigation doses are applied. Monitoring systems are indeed crucial in areas of the planet suffering from a lack of water, where environmental conditions are harsh, to ensure efficient crop growth. Moreover, plant diseases and pests impact crop yields; early detection gives the opportunity to treat a disease or pest quickly and effectively in order to reduce its impact. In this paper, we propose an integrated approach that optimizes water use, the supply of fertilizers, and the treatment of plant diseases and pests with a camera-equipped center pivot, by means of IoT and AI algorithms.
Article
Full-text available
Despite the latest research efforts to foster mobility and roaming in heterogeneous Low Power Wide Area Networks (LP-WANs), handover roaming of Internet of Things (IoT) devices has not been a success, mainly due to fragmentation, the difficulty of establishing trust across different network domains, and the lack of interoperability between different LP-WAN wireless protocols. To cope with this issue, this paper proposes a novel handover roaming mechanism for the Long Range Wide Area Network (LoRaWAN) protocol that relies on the trusted 5G network to perform IoT device authentication and key management, thereby extending the mobility and roaming capabilities of LoRaWAN to a global scale. The proposal enables interoperability between the 5G network and LoRaWAN, whereby multi Radio Access Technology IoT (multi-RAT IoT) devices can exploit both technologies interchangeably, thereby fostering novel IoT mobility and roaming use cases for LP-WANs not experimented with so far. Two integration approaches for LoRaWAN and 5G have been proposed, either assuming 5G spectrum connectivity with standard 5G authentication or performing 5G authentication over the LoRaWAN network. The solution has been deployed, implemented and validated in a real, integrated 5G-LoRaWAN testbed, showing its feasibility and security viability.
Article
Full-text available
Internet of Things (IoT) applications have grown in exorbitant numbers, generating a large amount of data required for intelligent data processing. However, the varying IoT infrastructures (i.e., cloud, edge, fog) and the limitations of the IoT application-layer protocols in transmitting/receiving messages become barriers to creating intelligent IoT applications. These barriers prevent current intelligent IoT applications from adaptively learning from other IoT applications. In this paper, we critically review how IoT-generated data are processed for machine learning analysis and highlight the current challenges in furthering intelligent solutions in the IoT environment. Furthermore, we propose a framework to enable IoT applications to adaptively learn from other IoT applications and present a case study of how the framework can be applied to real studies in the literature. Finally, we discuss the key factors that have an impact on future intelligent applications for the IoT.
Article
Full-text available
To process continuous sensor data in Internet of Things (IoT) environments, this study optimizes queries using multiple MJoin operators. To achieve efficient storage management, it classifies and reduces data using a support vector machine (SVM) classification algorithm. A global shared query execution technique was used to optimize multiple MJoin queries. By comparing each kernel function of the SVM classification algorithm, the system’s performance was evaluated through experiments according to the selected optimal kernel function and changes in sliding window size. Furthermore, to implement a smart home system that can actively respond to users, classified and reduced sensor data were utilized to enable the intelligent control of devices inside the home. The sensor data (e.g., temperature, humidity, gas) used to recognize the current conditions of an IoT-based smart home system and corresponding date data were classified into decision trees, and the system was designed using five sensors to intelligently control priorities such as ventilation, temperature, and fire and intrusion detection. The experiments demonstrated that the multiple MJoin technique yields high improvements in performance with relatively few searches. In this study, the sigmoid was selected as the optimal kernel function for the SVM classification algorithm. According to the SVM classification algorithm results, based on changes in the sliding window size, the average error rate was 2.42%, the reduction result was 17.58%, and the classification accuracy was 85.94%. According to the comparison of the classification performance of SVM and other algorithms, the SVM classification algorithm exhibited a minimum 9% better classification performance. Thus, compared to existing home systems, this algorithm is expected to increase system efficiency and convenience by enabling the configuration of a more intelligent environment according to the user’s characteristics or requirements.
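The kernel-selection step described above can be sketched as a straightforward cross-validated comparison; the synthetic temperature/humidity/gas readings and the toy label below are illustrative assumptions, and the paper's own data led it to select the sigmoid kernel.

```python
# A minimal sketch of comparing SVM kernels on synthetic sensor readings.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

rng = np.random.default_rng(2)
X = np.column_stack([rng.normal(22, 3, 500),    # temperature
                     rng.normal(45, 10, 500),   # humidity
                     rng.normal(200, 50, 500)]) # gas
y = (X[:, 0] > 24).astype(int)                  # toy label: "ventilate or not"

for kernel in ("linear", "rbf", "poly", "sigmoid"):
    clf = make_pipeline(StandardScaler(), SVC(kernel=kernel))
    score = cross_val_score(clf, X, y, cv=5).mean()
    print(f"{kernel:8s} accuracy = {score:.3f}")
```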
Article
Full-text available
Many IoT (Internet of Things) applications, like the industrial Internet and the smart city, collect data continuously from massive numbers of sensors. It is crucial to exploit and analyze the time series data efficiently. Subsequence matching is a fundamental task in mining time series data. Most existing works develop the index and the matching approach for static time series data. However, IoT applications need to continuously collect new data and deposit huge amounts of historical time series data, which poses a significant challenge for static indexing approaches. To address this challenge, we propose a lightweight index structure, L-index, and a matching approach, L-match, for the constrained normalized subsequence matching problem (cNSM). L-index is a two-layer structure built on a simple series synopsis, the mean values of disjoint windows. It is easy to build and update as data grows. Moreover, to further improve efficiency for variable query lengths, an optimization technique named SD-pruning is proposed. We conduct extensive experiments, and the results verify the effectiveness and efficiency of the proposed approach.
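The synopsis underlying L-index, mean values of disjoint windows, lends itself to a short pruning sketch: compare window-mean summaries first and keep only promising offsets for exact matching. The window size, threshold, and window-aligned search below are simplifications of the paper's index and pruning bounds, not a reproduction of them.

```python
# A minimal sketch of window-mean synopsis pruning for subsequence matching.
import numpy as np

def window_means(series, w):
    n = len(series) // w
    return series[:n * w].reshape(n, w).mean(axis=1)

def candidate_offsets(series, query, w, threshold):
    """Keep only window-aligned offsets whose synopsis distance is small."""
    s_syn = window_means(series, w)
    q_syn = window_means(query, w)
    m = len(q_syn)
    keep = []
    for i in range(len(s_syn) - m + 1):
        if np.abs(s_syn[i:i + m] - q_syn).max() <= threshold:
            keep.append(i * w)                  # map back to raw offsets
    return keep

rng = np.random.default_rng(3)
series = rng.normal(size=10_000)
query = series[4_096:4_096 + 256]               # a planted, window-aligned match
print(candidate_offsets(series, query, w=32, threshold=0.5))
```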
Article
Full-text available
With the advent of IoT, storing, indexing and querying XML data efficiently is critical. To minimize the cost of querying XML data, researchers have proposed many indexing techniques. Nearly all of these techniques partition the XML data into a number of data streams. To evaluate a query, existing twig pattern matching algorithms process a subset of the data streams simultaneously. Processing many data streams simultaneously results in some or all of the following four problems: accessing many data nodes that do not appear in the final solution of a given query, generating duplicate results, generating a huge number of intermediate results, and the cost of merging the generated intermediate results. To the best of our knowledge, all existing twig pattern matching algorithms suffer from some or all of the above-mentioned problems. This paper proposes a new twig pattern matching algorithm called MatchQTP which processes one data stream at a time and avoids all four of the above-mentioned problems. It also proposes a new indexing technique called RLP-Index and a new XML node labeling scheme called RLP-Scheme, both of which are used by MatchQTP. Unlike existing indexing techniques, RLP-Index stores only a subset of the data nodes; the rest can be generated efficiently. This minimizes storage space utilization and query processing time and makes RLP-Index the first of its kind. Many experiments were conducted to study the performance of MatchQTP. The results show that MatchQTP is very efficient and highly scalable. It was also compared with four algorithms, three of which are frequently used in the literature to compare the performance of new algorithms, while the fourth is the state-of-the-art algorithm. MatchQTP significantly and consistently outperformed all of them.
Article
Full-text available
Artificial intelligence (AI) and the Internet of Things (IoT) have progressively emerged in all domains of our daily lives. Nowadays, both domains are combined in what is called the Artificial Intelligence of Things (AIoT). With the increase in the amount of data produced by the myriad of connected things and the large volume of data needed to train artificial intelligence models, data processing and storage have become a real challenge. Indeed, the amount of data to process, to transfer over the network and to treat in the cloud has called classical data storage and processing architectures into question. Also, the large amount of data generated at the edge has increased the speed of data transportation, which is becoming the bottleneck for cloud-based computing paradigms. Post-cloud approaches using edge computing make it possible to improve latency and jitter. These infrastructures manage containerization and orchestration mechanisms in order to provide automatic and fast deployment and migration of services such as reasoning mechanisms, ontologies, and specifically adapted artificial intelligence algorithms. In this paper we propose a new architecture used to deploy microservices and adapted artificial intelligence algorithms and models at the edge level.
Article
Full-text available
The growing number of connected Internet of Things (IoT) devices has increased the need to process IoT data from multiple heterogeneous data stores. IoT data integration is a challenging problem owing to the heterogeneity of data stores in terms of their query language, data models, and schemas. In this paper, we propose a multi-store query system for IoT data called MusQ, where users can formulate join-operation queries over heterogeneous data sources. To reconcile the heterogeneity between the source schemas of IoT data stores, we extract a global schema from the local source schemas semi-automatically by applying schema-matching and schema-mapping steps. To minimize the burden on the user of understanding the finer details of various query languages, we define a unified query language called the multi-store query language (MQL), which follows a subset of the Datalog grammar. Thus, users can easily retrieve IoT data from multiple heterogeneous sources with MQL queries. As the three MQL query-processing join algorithms are based on a mediator-wrapper approach, MusQ performs efficient data integration over significant volumes of IoT data from multiple stores. We conduct extensive experiments to evaluate the performance of the MusQ system using a synthetic data set and a large real IoT data set for three different types of data stores (RDBMS, NoSQL, and HDFS). The experimental results demonstrate that MusQ provides suitable, scalable, and efficient query processing for multiple heterogeneous IoT data stores. These advantages of MusQ are important in several areas that involve complex IoT systems, such as smart cities, healthcare, and energy management.
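The mediator-wrapper idea behind MusQ can be sketched with two toy stores whose local schemas are wrapped into a shared global schema before a join is evaluated; the store contents, attribute names, and the hash join below are illustrative assumptions, and MQL itself is not reproduced.

```python
# A minimal mediator-style sketch: wrap two heterogeneous "stores" into one
# global schema, then join the wrapped records.
relational_store = [  # e.g. rows from an RDBMS table sensor(id, room)
    {"id": "s1", "room": "kitchen"},
    {"id": "s2", "room": "lab"},
]
document_store = [    # e.g. JSON documents from a NoSQL collection
    {"sensor_id": "s1", "temp_c": 23.5},
    {"sensor_id": "s2", "temp_c": 19.0},
]

def wrap_relational(rows):
    for r in rows:                       # map local schema -> global schema
        yield {"sensor": r["id"], "room": r["room"]}

def wrap_documents(docs):
    for d in docs:
        yield {"sensor": d["sensor_id"], "temperature": d["temp_c"]}

def hash_join(left, right, key):
    index = {}
    for rec in left:
        index.setdefault(rec[key], []).append(rec)
    for rec in right:
        for match in index.get(rec[key], []):
            yield {**match, **rec}

for row in hash_join(wrap_relational(relational_store),
                     wrap_documents(document_store), key="sensor"):
    print(row)
```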
Article
Full-text available
With the increasing number and diversity of Internet of Things (IoT) devices, more heterogeneous IoT wireless networks have emerged, providing more network services for IoT devices, especially mobile phones and other roaming devices. However, some malicious users use different means to attack the security of the network, so more users are beginning to pay attention to identity authentication and privacy protection in the Internet of Things. This paper designs an IoT node roaming authentication model, which is used to enhance the ability of the Internet of Things to securely authenticate roaming devices. In order to effectively prevent malicious nodes from connecting to the network, this paper proposes a roaming authentication protocol based on a heterogeneous fusion mechanism (HFM-IoT). The protocol uses the authentication servers in the local and remote areas to perform interactive authentication of the roaming device, which makes it harder for malicious nodes to attack or infect multiple network areas. According to the security analysis, the protocol can protect against multiple network attacks, and the experimental simulation results show that it has a lower energy burden and authentication delay.
Article
Full-text available
The Internet of Things (IoT) has become the infrastructure that widely supports ubiquitous applications. Due to the highly dynamic context setting, IoT search engines have attracted increasing attention from both industry and academia to crawl and search heterogeneous data sources. Today, a large amount of work on IoT search engines is devoted to finding a particular mobile object device, or a group of object devices satisfying a constraint described by query terms. However, studies enabling so-called spatial-temporal-keyword-aware queries are still lacking. Only a few research works simply apply keyword or spatial-temporal matching to identify object devices. In this case, it is insufficient to satisfy the user request without simultaneously considering the spatial, temporal and keyword aspects. To address this challenge, we develop a new search mechanism over the PKR-tree (denoted SMPKR), in which the PKR-tree integrates spatial-temporal-keyword proximity in a unified way with the help of a coding-enabled index. Efficient algorithms are developed for answering range and (enhanced) KNN queries. Extensive experimental results demonstrate that our SMPKR search engine improves the efficiency of searching for object devices with spatial-temporal-keyword constraints in comparison with the state of the art.
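What a spatial-temporal-keyword query asks for can be shown with a plain linear filter over illustrative device records; the PKR-tree in the paper exists precisely to avoid this kind of full scan, so the sketch only illustrates the query semantics, not the index.

```python
# A minimal sketch of a spatial-temporal-keyword range query over toy records.
devices = [
    {"id": "d1", "x": 2.0, "y": 3.0, "t": 100, "keywords": {"camera", "parking"}},
    {"id": "d2", "x": 8.5, "y": 1.0, "t": 180, "keywords": {"air", "quality"}},
    {"id": "d3", "x": 3.5, "y": 2.5, "t": 140, "keywords": {"camera", "traffic"}},
]

def stk_range_query(devices, x_range, y_range, t_range, required_keywords):
    for d in devices:
        if (x_range[0] <= d["x"] <= x_range[1]
                and y_range[0] <= d["y"] <= y_range[1]
                and t_range[0] <= d["t"] <= t_range[1]
                and required_keywords <= d["keywords"]):   # keyword containment
            yield d["id"]

print(list(stk_range_query(devices, (0, 5), (0, 5), (90, 150), {"camera"})))
# ['d1', 'd3']
```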
Article
Smart cities make full use of the new generation of Internet of Things (IoT) technology in the process of urban informatization to optimize urban management and services. However, in an IoT system, wireless sensor network devices exchanging information may not be able to resist all forms of attack, which may lead to security issues such as the disclosure of user data. To address the information security risks in smart cities, the typical IoT technologies are analyzed from the perspective of the IoT perception layer, and corresponding security solutions are provided for the existing security threats. Regarding communication security, the emerging wireless technology long range (LoRa) is discussed, and the performance of the wireless communication protocol is analyzed through simulation experiments, to verify that IoT technology based on LoRa communication can improve the security of the system in the construction of a smart city. The results show that REBEB, a new backoff algorithm, is similar to the binary exponential backoff algorithm in terms of throughput performance. REBEB focuses more on fairness, which reaches up to 0.985, and to a certain extent its security is significantly improved. The fairness of the REBEB algorithm is above 0.4 for different numbers of nodes and contention windows, and the fairness of the system is better when the number of nodes is small. To sum up, an IoT system based on LoRa communication can effectively improve the security performance of the system in the construction of a smart city and avoid security threats in IoT signal transmission.
Article
As deep learning, virtual reality, and other technologies mature, real-time data processing applications running on intelligent terminals are emerging endlessly; meanwhile, edge computing has developed rapidly and has become a popular research direction in the field of distributed computing. An edge computing network is a network computing environment composed of multiple edge computing nodes and data centers. First, the edge computing framework and key technologies are analyzed to improve the performance of real-time data processing applications. In a system scenario that considers the collaborative deployment of tasks across multiple edge nodes and data centers, the stream processing task deployment process is formally described, and an efficient multi-edge-node and computing-center collaborative task deployment algorithm is proposed, which solves the copy-free variant of the task deployment problem. Furthermore, a heterogeneous edge collaborative storage mechanism with tight coupling of computing and data is proposed, which resolves the tension between data demands and the limited computing and storage capabilities of intelligent terminals, thereby improving the performance of data processing applications. Here, a Feasible Solution (FS) algorithm is designed to solve the problem of placing copy-free data processing tasks in the system. The FS algorithm gives excellent results once overall coordination is considered. Under light load, the V value is reduced by 73% compared to the Only Data Center-available (ODC) algorithm and by 41% compared to the Hash algorithm. Under heavy load, the V value is reduced by 66% compared to the ODC algorithm and by 35% compared to the Hash algorithm. Having taken overall coordination and cooperation into account, the algorithm can more effectively use the bandwidth of edge nodes to transmit and process data streams, so that more tasks can be deployed on edge computing nodes, thereby saving the time needed to transmit data to the data centers. The end-to-end collaborative real-time data processing task scheduling mechanism proposed here can effectively avoid the disadvantages of long waiting times and of being unable to obtain the required data, which significantly improves the success rate of tasks and thus ensures the performance of real-time data processing.
Book
This book provides the reader with the most up-to-date knowledge of blockchain in the mainstream areas of security, trust, and privacy in the decentralized domain, which is timely and essential (distributed and P2P applications are increasing day by day, and attackers adopt new mechanisms to threaten the security and privacy of users in those environments). This book also provides technical information on blockchain-oriented software, applications, and tools required by researchers and developers in both computing and software engineering to provide solutions and automated systems against current security, trust and privacy issues in cyberspace. Cybersecurity, trust and privacy (CTP) are pressing needs for governments, businesses, and individuals, receiving the utmost priority for enforcement and improvement in almost every society around the globe. Rapid advances, on the other hand, are being made in emerging blockchain technology with broadly diverse applications that promise to better meet business and individual needs. Blockchain, as a promising infrastructural technology, seems to have the potential to be leveraged in different aspects of cybersecurity, promoting a decentralized cyberinfrastructure. Blockchain characteristics such as decentralization, verifiability and immutability may revolutionize current cybersecurity mechanisms for ensuring the authenticity, reliability, and integrity of data. Almost every article on blockchain points out that cybersecurity (and its derivatives) could be revitalized if it were supported by blockchain technology. Yet, little is known about the factors related to decisions to adopt this technology, and how it can systematically be put into use to remedy current CTP issues in the digital world. This book provides information for security and privacy experts in all the areas of blockchain, cryptocurrency, cybersecurity, forensics, smart contracts, computer systems, computer networks, software engineering, and applied artificial intelligence, as well as for big data analysts and decentralized systems practitioners. Researchers, scientists and advanced-level students working in computer systems, computer networks, artificial intelligence, and big data will find this book useful as well.
Article
As Internet-of-Things (IoT) networks provide efficient ways to transfer data, they are used widely in data sensing applications. These applications can further include wireless sensor networks. One of the critical problems in sensor-equipped IoT networks is to design energy efficient data aggregation algorithms that address the issues of maximum value and distinct set query. In this paper, we propose an algorithm based on uniform sampling and Bernoulli sampling to address these issues. We have provided logical proofs to show that the proposed algorithms return accurate results with a given probability. Simulation results show that these algorithms have high performance compared with a simple distributed algorithm in terms of energy consumption.
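The sampling-based aggregation idea can be sketched as follows: each node reports its reading with probability p, and the sink answers the maximum-value query from the sample. The paper derives the probabilistic accuracy guarantees; the synthetic readings and the naive treatment of the distinct-set query below are illustrative simplifications.

```python
# A minimal sketch of Bernoulli sampling for in-network aggregation.
import random

random.seed(4)
readings = [random.randint(0, 50) for _ in range(10_000)]   # one value per node

def bernoulli_sample(values, p):
    """Each node independently reports its value with probability p."""
    return [v for v in values if random.random() < p]

sample = bernoulli_sample(readings, p=0.1)
print("sampled nodes:", len(sample))
print("estimated maximum:", max(sample))              # equals the true max with high probability
print("distinct values observed:", len(set(sample)))  # a lower bound on the true count
print("true maximum / distinct:", max(readings), len(set(readings)))
```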
Article
Extracting valuable features and information from big data has become one of the important research issues in data science. In most Internet of Things (IoT) applications, the collected data are uncertain and imprecise due to sensor device variations or transmission errors. In addition, the sensed data may change as time evolves. We refer to an uncertain data stream as a dataset that has the velocity, veracity, and volume properties simultaneously. This paper employs the parallelism of edge computing environments to facilitate top-k dominating query processing over multiple uncertain IoT data streams. The challenges of this problem include how to quickly update the result to handle uncertainty, how to reduce the computation cost, and how to provide highly accurate results. Drawing on related existing work for certain data, we provide an effective probabilistic top-k dominating query process on uncertain data streams, which can be parallelized easily. After discussing the properties of the proposed approach, we validate our methods through complexity analysis and extensive simulated experiments. In comparison with existing works, the experimental results indicate that our method can reduce computation time by almost 60%, reduce communication cost between servers by nearly 20%, and provide highly accurate results in most scenarios.
Conference Paper
Due to the improvement of technology, most of the devices used nowadays are connected to the Internet; therefore, a huge amount of data is generated, transmitted, and used by these devices. In general, these devices are limited in resources such as memory, processors, and battery lifetime. Reducing the data size reduces the energy required to process the data, minimizes its storage, and lowers the energy required to transmit it, so applying data compression techniques on these devices comes in handy. This paper provides a survey and a comparative study of the most commonly used IoT compression techniques. The study addresses the techniques in terms of different attributes such as the compression type (lossless or lossy), the limitations of the compression technique, the location where the compression is applied, and the implementation of the compression technique.
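As a concrete example of a lossless technique of the kind surveyed above, the sketch below delta-encodes slowly varying sensor samples and packs the deltas with a zig-zag plus 7-bit variable-length byte encoding; it is illustrative and not taken from the paper, and the 4-byte raw sample size is an assumption.

```python
# A minimal sketch of delta + variable-length encoding for sensor readings.
def zigzag(n):                      # map signed deltas to small unsigned ints
    return (n << 1) if n >= 0 else (((-n) << 1) - 1)

def unzigzag(u):
    return (u >> 1) if u % 2 == 0 else -((u + 1) >> 1)

def encode(samples):
    out, prev = bytearray(), 0
    for s in samples:
        u = zigzag(s - prev)
        prev = s
        while True:                 # 7-bit groups, MSB marks continuation
            byte = u & 0x7F
            u >>= 7
            out.append(byte | (0x80 if u else 0))
            if not u:
                break
    return bytes(out)

def decode(data):
    samples, prev, u, shift = [], 0, 0, 0
    for byte in data:
        u |= (byte & 0x7F) << shift
        shift += 7
        if not byte & 0x80:
            prev += unzigzag(u)
            samples.append(prev)
            u, shift = 0, 0
    return samples

readings = [2210, 2212, 2211, 2215, 2215, 2214]   # e.g. temperature in centi-degrees
blob = encode(readings)
assert decode(blob) == readings
print(f"{len(readings) * 4} raw bytes (assuming int32) -> {len(blob)} compressed bytes")
```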
Conference Paper
All existing models for analyzing the performance of LoRaWAN assume a constant density of nodes within the gateway range. We claim that such a situation is highly unlikely for LoRaWAN cells whose range can attain several kilometers in real-world deployments. We thus propose to analyze the LoRa performance under a more realistic assumption: the density of nodes decreases with the inverse square of the distance to the gateway. We use the LoRaWAN capacity model by Georgiou and Raza to find the Packet Delivery Ratio (PDR) for an inhomogeneous spatial distribution of devices around a gateway and obtain the number of devices that benefit from a given level of PDR. We analyze the LoRaWAN capacity in terms of PDR for various spatial configurations and Spreading Factor allocations.
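The inhomogeneous deployment assumption has a simple consequence worth spelling out: with density ρ(r) = c/r², the number of devices in an annulus [r1, r2] is ∫ 2πr · c/r² dr = 2πc · ln(r2/r1), so rings with the same radius ratio contain the same number of devices. The constant c and the ring radii in the sketch below are illustrative, not taken from the paper, and no PDR model is computed here.

```python
# A minimal numeric sketch of device counts per ring under inverse-square density.
import math

c = 50.0                                                         # density scale constant (illustrative)
rings_km = [(0.1, 1.0), (1.0, 2.0), (2.0, 4.0), (4.0, 8.0)]      # toy annuli around the gateway

for r1, r2 in rings_km:
    n = 2 * math.pi * c * math.log(r2 / r1)                      # devices in [r1, r2]
    print(f"ring {r1:>4.1f}-{r2:>4.1f} km: about {n:6.1f} devices")
```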
Article
The Social Internet of Things (SIoT) is the integration of social networks (SNs) and the Internet of Things (IoT). Community search in the SIoT is an important problem beneficial to resource/service discovery. In this paper, we address the problem from the perspective of dense subgraph queries. Specifically, we propose a core-based static dense subgraph query and a graph-kernel-based dynamic dense subgraph query. The two algorithms account for the large scale and the time-varying nature of the SIoT, respectively. Unlike existing works, the static method is inspired by the first-connection-last-expansion idea. The top-k neighbors of the query node are first found by random walk. Then, the query node and its top-k neighbors are connected as the core using a Steiner tree expansion. Subsequently, by rank-constrained random sampling, we extend the core into a dense subgraph. Furthermore, by leveraging a graph kernel index and the updates that may affect the results, we conduct the dynamic query in an incremental update fashion instead of executing it from scratch. Experiments on synthetic and real data sets show that the proposed algorithms are both effective and efficient.
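The first step of the static query, finding the top-k neighbors of the query node by random walk, can be sketched with personalized PageRank on a toy graph; the core expansion via a Steiner tree and the rank-constrained sampling of the paper are not reproduced here, and the graph used is just a stand-in.

```python
# A minimal sketch of "top-k neighbors by random walk" using personalized PageRank.
import networkx as nx

G = nx.karate_club_graph()                      # stand-in for an SIoT graph
query_node, k = 0, 5

scores = nx.pagerank(G, alpha=0.85, personalization={query_node: 1.0})
top_k = sorted((n for n in G if n != query_node),
               key=lambda n: scores[n], reverse=True)[:k]
print(f"top-{k} random-walk neighbors of node {query_node}: {top_k}")
```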
Conference Paper
Compressed sensing (CS) is a technique for signal sampling below the Nyquist rate, based on the assumption that the signal is sparse in some transform domain. The acquired signal is represented in a compressed form that is appropriate for storage, transmission and further processing. In this paper, the use of Chebyshev polynomials of the first kind on intervals is proposed for an efficient representation of one-dimensional, continuous-time signals. To avoid boundary artifacts, a desired number of derivatives are equalized at each interval end in a spline-like fashion. Unlike splines, the proposed system of equations is underdetermined, to provide the necessary degree of freedom for achieving sparsity using l1 optimization. The obtained parametric model fits into the compressed sensing setup and offers a new paradigm for efficient processing of analog data on a digital computer. Simulation results of the proposed measurement system and an example of data processing are given to prove its potential.
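A minimal compressed-sensing sketch in the spirit of the paper: generate a signal that is sparse in a Chebyshev polynomial dictionary, sample it at random points, and recover the coefficients. Orthogonal Matching Pursuit is used here as a greedy stand-in for the paper's l1 optimization, and the spline-like stitching of interval boundaries is not reproduced; grid size, measurement count and sparsity are arbitrary example values.

```python
# A minimal sketch: sparse recovery of a Chebyshev-sparse signal from few samples.
import numpy as np
from numpy.polynomial import chebyshev as C
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(5)
N, M, K = 256, 64, 4                            # grid size, measurements, sparsity

t = np.cos(np.pi * (np.arange(N) + 0.5) / N)    # points in [-1, 1]
coeffs = np.zeros(N)
coeffs[rng.choice(N, K, replace=False)] = rng.normal(size=K)
signal = C.chebval(t, coeffs)                   # K-sparse in the Chebyshev basis

rows = rng.choice(N, M, replace=False)          # random (sub-Nyquist) samples
dictionary = C.chebvander(t, N - 1)             # columns T_0..T_{N-1} evaluated at t
y, A = signal[rows], dictionary[rows]

omp = OrthogonalMatchingPursuit(n_nonzero_coefs=K, fit_intercept=False).fit(A, y)
recovered = C.chebval(t, omp.coef_)
print("max reconstruction error:", np.abs(recovered - signal).max())
```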
Article
With the Internet of Things (IoT) becoming the infrastructure supporting domain applications, IoT search engines have attracted increasing attention from users, industry, and the research community, since they are capable of crawling heterogeneous data sources in highly dynamic environments. IoT search engines have to be able to process tens of thousands of spatial-time-keyword queries per second, making query throughput a critical issue. To handle this heavy workload, this paper first proposes caching mechanisms in a collaborative edge-cloud computing architecture, which implement the caching paradigm in the cloud for frequent n-hop neighbor activity regions. With our design, frequent query results can be obtained quickly from the spatial-time-keyword filtering index of n-hop neighbor regions by modeling keyword relevance and uncertain traveling time. In addition, we use the previously proposed STK-tree to directly answer non-frequent queries. Extensive experiments on real-life and synthetic datasets demonstrate that our proposed method outperforms the state-of-the-art approaches with respect to query time and message number.
Article
The Industrial Internet of Things (IIoT) enables intelligent industrial operations by incorporating artificial intelligence (AI) and big data technologies. An AI-enabled framework typically requires prompt and private cloud-based service to process and aggregate manufacturing data. Thus, integrating intelligence into edge computing is without doubt a promising development trend. Nevertheless, edge intelligence brings heterogeneity to the edge servers, in terms of not only computing capability, but also service accuracy. Most works on offloading in edge computing focus on finding the power-delay trade-off, ignoring service accuracy provided by edge servers as well as the accuracy required by IIoT devices. In this vein, in this article we introduce an intelligent computing architecture with cooperative edge and cloud computing for IIoT. Based on the computing architecture, an AI enhanced offloading framework is proposed for service accuracy maximization, which considers service accuracy as a new metric besides delay, and intelligently disseminates the traffic to edge servers or through an appropriate path to remote cloud. A case study is performed on transfer learning to show the performance gain of the proposed framework.
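The accuracy-aware offloading decision described above reduces, in its simplest form, to picking among feasible targets the one with the best service accuracy under the task's delay constraint; the servers, accuracies, and delays below are illustrative numbers and not the article's framework or its optimization formulation.

```python
# A minimal sketch of accuracy-aware offloading target selection.
servers = [
    {"name": "edge-1", "accuracy": 0.88, "delay_ms": 25},
    {"name": "edge-2", "accuracy": 0.92, "delay_ms": 60},
    {"name": "cloud",  "accuracy": 0.99, "delay_ms": 180},
]

def offload_target(servers, required_accuracy, deadline_ms):
    """Pick the feasible target (meets deadline and accuracy floor) with best accuracy."""
    feasible = [s for s in servers
                if s["delay_ms"] <= deadline_ms and s["accuracy"] >= required_accuracy]
    if not feasible:
        return None                              # task must be served locally or rejected
    return max(feasible, key=lambda s: s["accuracy"])

print(offload_target(servers, required_accuracy=0.90, deadline_ms=100))
# {'name': 'edge-2', 'accuracy': 0.92, 'delay_ms': 60}
```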