Article

Abstract

Edge-of-Things (EoT) enables the seamless transfer of services, storage, and data processing from the Cloud layer to Edge devices in large-scale distributed Internet of Things (IoT) ecosystems (e.g., industrial systems). This transition raises privacy and security concerns across the layers of the EoT paradigm. Intrusion detection systems are deployed in EoT ecosystems to protect the underlying resources from attackers. However, current intrusion detection systems are not intelligent enough to control false alarms, which significantly lower reliability and add to the analysis burden. In this article, we present DaaS, Dew Computing as a Service for intelligent intrusion detection in EoT ecosystems. In DaaS, a deep learning-based classifier is used to design an intelligent alarm filtration mechanism, in which the filtration accuracy is improved (or sustained) by using deep belief networks (DBNs). In the past, cloud-based techniques have been applied to offload EoT tasks, which increases the burden on the middle layer and raises the communication delay. Here, we use dew computing features to design the smart false alarm reduction system. When evaluated in a simulated environment, DaaS shows a lower response time for processing data in the EoT ecosystem. The revamped DBN model achieved a classification accuracy of up to 95%. Moreover, it shows a 60% improvement in latency and a 35% reduction in cloud server workload compared to an Edge-based intrusion detection system.
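The abstract centres on a DBN-based alarm filtration classifier. Below is a minimal, hedged sketch of that idea using scikit-learn: the DBN is approximated by stacked BernoulliRBM feature extractors with a logistic-regression output stage, and the alarm feature matrix X and labels y are random placeholders rather than the paper's data.

```python
# Minimal sketch: DBN-style false-alarm filter approximated with stacked
# BernoulliRBM feature extractors and a logistic-regression output layer.
# X (alarm features scaled to [0, 1]) and y (1 = true alert, 0 = false alarm)
# are hypothetical placeholders, not the paper's dataset.
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import MinMaxScaler
from sklearn.neural_network import BernoulliRBM
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.random((2000, 41))          # e.g. 41 connection features per alarm
y = rng.integers(0, 2, size=2000)   # placeholder labels

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

dbn_filter = Pipeline([
    ("scale", MinMaxScaler()),                  # RBMs expect inputs in [0, 1]
    ("rbm1", BernoulliRBM(n_components=64, learning_rate=0.05, n_iter=20, random_state=0)),
    ("rbm2", BernoulliRBM(n_components=32, learning_rate=0.05, n_iter=20, random_state=0)),
    ("clf", LogisticRegression(max_iter=1000)), # supervised fine-tuning stage
])
dbn_filter.fit(X_train, y_train)
print("filtration accuracy:", dbn_filter.score(X_test, y_test))
```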


... The proliferation of IoT and its success provides a new horizon of computing called the Edge computing [2], where technologies perform computation at the edge of the network. Edge computing has solved issues of response time, bandwidth cost saving, data safety, and privacy [3] [4]. EoT allows on-device computing and analytics. ...
... The recommendation is updated in the Edge List (lines 3-11 of the algorithm), and the final Edge List is returned (line 12). ...
Article
Edge of Things (EoT) technology enables end-users to participate, via smart sensors and mobile devices (such as smartphones and wearables), with the smart devices deployed across a smart city. Trust management is the main challenge in EoT infrastructure for identifying trusted participants. Quality of Service (QoS) is strongly affected by malicious users supplying fake or altered data. In this paper, a Robust Trust Management (RTM) scheme is designed based on Bayesian learning and collaborative filtering. The RTM model is updated regularly, applying a decay to the current calculated scores after each interval so that behavior changes are reflected quickly. The dynamic characteristics of edge nodes are analyzed with a new probability score mechanism derived from recent service behavior. The performance of the proposed trust management scheme is evaluated in a simulated environment, with the percentage of collaborating devices tuned to 10%, 50%, and 100%. The proposed RTM scheme achieves a maximum accuracy of 99.8%. The experimental results demonstrate that the RTM scheme outperforms existing techniques in filtering malicious behavior and in accuracy.
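As a rough illustration of the Bayesian-learning-with-decay idea described above, the sketch below keeps Beta pseudo-counts of good and bad interactions per node and applies a periodic decay so recent behaviour dominates; the decay constant and update rule are illustrative assumptions, not the RTM scheme's exact formulas.

```python
# Minimal sketch of Bayesian (Beta) trust scoring with periodic decay.
# Decay constant and prior counts are illustrative assumptions.
class BetaTrust:
    def __init__(self, decay=0.9):
        self.alpha = 1.0   # prior pseudo-count of good behaviour
        self.beta = 1.0    # prior pseudo-count of bad behaviour
        self.decay = decay

    def observe(self, satisfied: bool) -> None:
        """Update counts from one observed service interaction."""
        if satisfied:
            self.alpha += 1.0
        else:
            self.beta += 1.0

    def apply_decay(self) -> None:
        """Called at each interval so recent behaviour dominates the score."""
        self.alpha = 1.0 + self.decay * (self.alpha - 1.0)
        self.beta = 1.0 + self.decay * (self.beta - 1.0)

    @property
    def score(self) -> float:
        return self.alpha / (self.alpha + self.beta)

node = BetaTrust()
for outcome in [True, True, False, True]:
    node.observe(outcome)
node.apply_decay()
print(f"trust score: {node.score:.3f}")   # ~0.66 after 3 good / 1 bad interactions plus decay
```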
... A dew computing as a service (DaaS) for improving the performance of intrusion detection in edge of things (EoT) ecosystems has been proposed in [38]. It acts as a cloud in the local environment that collaborates with the public cloud to reduce the communication delay and cloud server workload. ...
... Table 5 shows the results of the 7-class attack detection experiment on the CICIDS2017 dataset for APAE against other algorithms. Note that references [33, 35-38, 43] did not evaluate their work on this dataset, and their source code is not publicly available. However, the source code for the MemAE and NDAE algorithms is publicly available, and we used it to obtain the results of this experiment for those algorithms. ...
Article
Full-text available
In recent years, the world has dramatically moved toward using the internet of things (IoT), and the IoT has become a hot research field. Among various aspects of IoT, real-time cyber-threat protection is one of the most crucial elements due to the increasing number of cyber-attacks. However, current IoT devices often offer minimal security features and are vulnerable to cyber-attacks. Therefore, it is crucial to develop tools to detect such attacks in real time. This paper presents a new and intelligent network intrusion detection system named APAE that is based on an asymmetric parallel auto-encoder and is able to detect various attacks in IoT networks. The encoder part of APAE has a lightweight architecture that contains two encoders in parallel, each one having three successive layers of convolutional filters. The first encoder is for extracting local features using standard convolutional layers and a positional attention module. The second encoder also extracts the long-range information using dilated convolutional layers and a channel attention module. The decoder part of APAE is different from its encoder and has eight successive transposed convolution layers. The proposed APAE approach has a lightweight and suitable architecture for real-time attack detection and provides very good generalization performance even after training using very limited training records. The efficacy of the APAE has been evaluated using three popular public datasets named UNSW-NB15, CICIDS2017, and KDDCup99, and the results showed the superiority of the proposed model over the state-of-the-art algorithms.
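The following toy PyTorch sketch mirrors the general structure the APAE abstract describes: two parallel encoder branches, one with standard and one with dilated 1-D convolutions, feeding a shared transposed-convolution decoder. It is not the authors' architecture; the layer widths, the 64-feature input, and the omission of the positional/channel attention modules are simplifying assumptions.

```python
# Toy sketch of a parallel auto-encoder in the spirit of APAE: a local branch
# with standard convolutions, a long-range branch with dilated convolutions,
# and a shared transposed-convolution decoder. All sizes are assumptions.
import torch
import torch.nn as nn

class ParallelAE(nn.Module):
    def __init__(self):
        super().__init__()
        self.local = nn.Sequential(             # local-feature branch
            nn.Conv1d(1, 8, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv1d(8, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv1d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.dilated = nn.Sequential(           # long-range branch
            nn.Conv1d(1, 8, 3, stride=2, padding=2, dilation=2), nn.ReLU(),
            nn.Conv1d(8, 16, 3, stride=2, padding=2, dilation=2), nn.ReLU(),
            nn.Conv1d(16, 32, 3, stride=2, padding=2, dilation=2), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose1d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose1d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose1d(16, 1, 4, stride=2, padding=1),
        )

    def forward(self, x):                       # x: (batch, 1, 64)
        z = torch.cat([self.local(x), self.dilated(x)], dim=1)
        return self.decoder(z), z               # reconstruction + latent code

x = torch.randn(4, 1, 64)                       # 4 flow records, 64 features each
recon, latent = ParallelAE()(x)
print(recon.shape, latent.shape)                # torch.Size([4, 1, 64]) torch.Size([4, 64, 8])
```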
... a Dew Computing as a Service (DaaS) for improving the performance of intrusion detection in Edge of Things (EoT) ecosystems has been proposed in [30]. It acts as a cloud in the local environment that collaborates with the public cloud to reduce the communication delay and cloud server workload. ...
... Its overall accuracy is about 1% and 2% higher than the APAE and MemAE, respectively. It is also far better than the overall accuracy of the references [30] and [39]. ...
Article
Full-text available
In recent years, the Internet of Things (IoT) has received a lot of attention. It has been used in many applications such as the control industry, industrial plants, and medicine. In this regard, a fundamental necessity is to implement security in IoT. To this end, network intrusion detection systems (NIDSs) have recently been used for the detection of network attacks and threats. Currently, these systems use a variety of deep learning (DL) models, such as convolutional neural networks, to improve the detection of attacks. However, almost all current DL-based NIDSs are made up of many layers and therefore need a lot of processing resources because of their high number of parameters. On the other hand, due to the lack of processing resources, such inefficient DL models are unusable in IoT devices. This paper presents a very accurate NIDS named DFE, which uses a very lightweight and efficient neural network based on the idea of deep feature extraction. In this model, the input vector of the network is permuted in a 3D space, and its individual values are brought close together. This allows the model to extract highly discriminative features using a small number of layers without the need to use large 2D or 3D convolution filters. As a result, the network can achieve an accurate classification using a significantly small number of required calculations. This makes the DFE ideal for real-time intrusion detection by IoT devices with limited processing capabilities. The efficacy of the DFE has been evaluated using three popular public datasets named UNSW-NB15, CICIDS2017, and KDDCup99, and the results show the superiority of the proposed model over the state-of-the-art algorithms.
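Reading the DFE abstract, the core trick is to permute a flat feature vector so related values end up near each other in a small 3-D volume that cheap convolutions can exploit. The sketch below is only one interpretation of that idea: the random permutation (standing in for a learned or engineered ordering), the 5x5x5 volume, and the tiny Conv3d layers are assumptions; the actual DFE reportedly avoids large 2-D/3-D filters altogether.

```python
# Sketch of the permute-and-reshape idea: reorder a flat feature vector,
# reshape it into a small 3-D volume, and extract features with tiny kernels.
# Permutation, volume size, and layer widths are illustrative assumptions.
import torch
import torch.nn as nn

n_features = 125
perm = torch.randperm(n_features)              # stand-in for a learned/engineered ordering

x = torch.randn(8, n_features)                 # 8 flow records
volume = x[:, perm].reshape(8, 1, 5, 5, 5)     # (batch, channels, D, H, W)

extractor = nn.Sequential(
    nn.Conv3d(1, 4, kernel_size=2), nn.ReLU(), # 5^3 -> 4^3
    nn.Conv3d(4, 8, kernel_size=2), nn.ReLU(), # 4^3 -> 3^3
    nn.Flatten(),
    nn.Linear(8 * 3 * 3 * 3, 2),               # binary attack / normal head
)
print(extractor(volume).shape)                 # torch.Size([8, 2])
```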
... These devices produce an enormous amount of data, which is further processed by the IoT devices themselves, in addition to the communication with higher-level servers in the cloud. Dew-assisted computing for the newest generation of robotics applications is discussed in [13], and dew computing as a Service (DaaS) for intelligent intrusion detection in Edge of Things (EoT) ecosystems is described in [14]. For none of these applications [11-14] have the corresponding security aspects been addressed. ...
Article
Full-text available
Dew computing is complementing fog and cloud computing by offering the first layer of connection for any IoT device in the field. Typically, data are stored locally in the dew servers in cases when, for instance, the Internet is not available. Therefore, dedicated authentication and key agreement protocols need to be developed in order to guarantee secure communication without the online presence of a trusted third party. First, a complete and clear presentation of the attack model and the required security features for dew computing scenarios is provided. Next, the relation with client-server security schemes is explained, and two particular criteria are identified that need to be addressed in these schemes in order for them to serve as security schemes for dew computing. It is shown how a recently published client-server authentication scheme satisfying these two criteria can be extended with a key agreement feature, resulting in a very efficient authentication and key agreement scheme for a dew computing scenario. The obtained scheme outperforms the currently available alternatives from a security point of view and behaves similarly with respect to computational and communication effort. In particular, severe security vulnerabilities are demonstrated for a recently proposed dedicated dew computing authentication and key agreement protocol.
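To make the kind of handshake discussed above concrete, here is a toy challenge-response authentication with session-key derivation between a device and a dew server that share a long-term secret. It is emphatically not the scheme analysed in the paper and carries no security claims; it only illustrates an offline handshake needing no online trusted third party, using Python's standard hmac and hashlib.

```python
# Toy challenge-response mutual authentication + session-key derivation over
# a pre-shared secret. Illustrative only; NOT the paper's protocol.
import hmac, hashlib, os

shared_secret = os.urandom(32)          # provisioned out of band

# device -> server: nonce_d ; server -> device: nonce_s, tag_s
nonce_d, nonce_s = os.urandom(16), os.urandom(16)
tag_s = hmac.new(shared_secret, b"server" + nonce_d + nonce_s, hashlib.sha256).digest()

# device verifies the server's tag, then proves itself with its own tag
assert hmac.compare_digest(
    tag_s, hmac.new(shared_secret, b"server" + nonce_d + nonce_s, hashlib.sha256).digest())
tag_d = hmac.new(shared_secret, b"device" + nonce_s + nonce_d, hashlib.sha256).digest()

# both sides derive the same session key from the exchanged nonces
session_key = hmac.new(shared_secret, b"key" + nonce_d + nonce_s, hashlib.sha256).digest()
print("session key established:", session_key.hex()[:16], "...")
```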
... Data leakage is one of the most relevant issues in internet-based healthcare applications. Aujla et al. [7] controlled the deduplication of big data in the cloud environment using a Merkle hash-tree, whereas Singh et al. [61] proposed an intrusion detection system using dew computing as a service. Kaur et al. [34] provided a solution for the problem of dimensionality reduction for big data on the smart grid. ...
Article
Full-text available
Healthcare organizations and Health Monitoring Systems generate large volumes of complex data, which offer the opportunity for innovative investigations in medical decision making. In this paper, we propose a beetle swarm optimization and adaptive neuro-fuzzy inference system (BSO-ANFIS) model for heart disease and multi-disease diagnosis. The main components of our analytics pipeline are the modified crow search algorithm, used for feature extraction, and an ANFIS classification model whose parameters are optimized by means of a BSO algorithm. The accuracy achieved in heart disease detection is 99.1% with 99.37% precision. In multi-disease classification, the accuracy achieved is 96.08% with 98.63% precision. The results from both tasks prove the comparative advantage of the proposed BSO-ANFIS algorithm over the competitor models.
... There is no standard metric in the literature. Some metrics consider the GIPS [7], the number of jobs completed [31], the network latency [63], or the battery levels [64]. However, they all have advantages and disadvantages. ...
Article
Full-text available
Due to mobile and IoT devices’ ubiquity and their ever-growing processing potential, Dew computing environments have been emerging topics for researchers. These environments allow resource-constrained devices to contribute computing power to others in a local network. One major challenge in these environments is task scheduling: that is, how to distribute jobs across devices available in the network. In this paper, we propose to distribute jobs in Dew environments using artificial intelligence (AI). Specifically, we show that an AI agent, known as Proximal Policy Optimization (PPO), can learn to distribute jobs in a simulated Dew environment better than existing methods—even when tested over job sequences that are five times longer than the sequences used during the training. We found that using our technique, we can gain up to 77% in performance compared with using human-designed heuristics.
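The sketch below shows how a PPO agent from stable-baselines3 could be trained on a toy Dew-scheduling environment: the observation is the per-device queue load, the action is the device that receives the next job, and the reward penalises long queues. The environment, reward shaping, and hyper-parameters are illustrative assumptions, not the simulator or setup used in the paper.

```python
# Toy Dew job-scheduling environment + PPO training with stable-baselines3.
# Devices, job costs, and the reward are illustrative assumptions.
import numpy as np
import gymnasium as gym
from gymnasium import spaces
from stable_baselines3 import PPO

class ToyDewScheduler(gym.Env):
    """Observation: current load of each device; action: device to receive the job."""
    def __init__(self, n_devices=3, episode_len=50):
        super().__init__()
        self.n_devices, self.episode_len = n_devices, episode_len
        self.observation_space = spaces.Box(0.0, np.inf, shape=(n_devices,), dtype=np.float32)
        self.action_space = spaces.Discrete(n_devices)

    def reset(self, seed=None, options=None):
        super().reset(seed=seed)
        self.load = np.zeros(self.n_devices, dtype=np.float32)
        self.speed = self.np_random.uniform(0.5, 2.0, self.n_devices).astype(np.float32)
        self.t = 0
        return self.load.copy(), {}

    def step(self, action):
        job_cost = float(self.np_random.uniform(1.0, 5.0))
        self.load[action] += job_cost / self.speed[action]   # queue the job
        reward = -float(self.load[action])                   # penalise long queues
        self.t += 1
        return self.load.copy(), reward, self.t >= self.episode_len, False, {}

model = PPO("MlpPolicy", ToyDewScheduler(), verbose=0)
model.learn(total_timesteps=10_000)
obs, _ = ToyDewScheduler().reset(seed=0)
action, _ = model.predict(obs, deterministic=True)
print("device chosen for next job:", int(action))
```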
... Dew Computing (DC) is another emerging technology which draws tremendous attention from both academia and industry. DC enables cloud computing facilities with minimal or no connection to the internet [3]. DC can assist drones in remote areas to maintain computation and data until it comes in contact with a server. ...
Conference Paper
Pandemics are very burdensome for every country in terms of health and the economy. Technology needs to come forward to lessen the burden. As finding a cure instantly is not always possible, prevention methods can be adopted to reduce the damage. Among these methods, home quarantine (HQ) can be beneficial. However, maintaining HQ is very difficult, especially in remote areas where network connectivity is very limited. Drones, dew computing, and blockchain are emerging technologies that can assist in reducing the damage. This paper presents a blockchain-enabled home quarantine supervision scheme in which patients are monitored using drones. A lightweight computation environment is established to assist the drone. A proof of concept is established to demonstrate feasibility.
... II. REPRESENTATION TECHNIQUES In this technique, the primary objective of the path-planning process is to represent UAVs in a real-time situation [8]. To achieve this, there are two commonly used techniques: sampling-based and artificial-based. ...
... In the proposed FBI, drones are employed in the data-accumulation process, in which IoT devices train models using local data, and these models are stored in the blockchain. Although IoT devices share only the parameters of a model, the server may retrieve the sender's data using the model information. (Dew computing is a new computing technique in which the server remains within the device, providing an offline computing environment [9].) ...
Article
This letter presents a federated learning-based data-accumulation scheme that combines drones and blockchain for remote regions where Internet of Things devices face network scarcity and potential cyber threats. The scheme contains a two-phase authentication mechanism in which requests are first validated using a cuckoo filter, followed by a timestamp nonce. Secure accumulation is achieved by validating models using a Hampel filter and loss checks. To increase the privacy of the model, differential privacy is employed before sharing. Finally, the model is stored in the blockchain after consent is obtained from mining nodes. Experiments are performed in a proper environment, and the results confirm the feasibility of the proposed scheme.
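One concrete piece of the pipeline above is the Hampel-filter check on incoming model updates. The sketch below flags an update whose reported loss deviates from the median of recent losses by more than k scaled median absolute deviations; the window size and threshold are illustrative assumptions.

```python
# Minimal sketch of a Hampel-style outlier check used to vet model updates
# before accumulation. Window contents and threshold k are assumptions.
import numpy as np

def hampel_outlier(value: float, window: np.ndarray, k: float = 3.0) -> bool:
    """Return True if `value` is an outlier relative to a window of recent losses."""
    median = np.median(window)
    mad = np.median(np.abs(window - median))
    scale = 1.4826 * mad            # MAD -> std-dev equivalent for Gaussian data
    return abs(value - median) > k * max(scale, 1e-12)

recent_losses = np.array([0.42, 0.40, 0.44, 0.39, 0.41, 0.43, 0.40])
print(hampel_outlier(0.41, recent_losses))   # False -> accept the update
print(hampel_outlier(3.70, recent_losses))   # True  -> reject (possibly poisoned)
```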
Chapter
This chapter mainly presents a detailed discussion of IoT technologies and dependent systems, with the main objective of emphasizing the attributes of IoT systems that might threaten the security of the system. Firstly, the definition of the IoT system and a detailed description of its architecture are presented, along with a taxonomy dividing the architecture into layers with different complementary roles. Secondly, the concepts of cloud computing, fog computing, and edge computing are discussed and compared in view of IoT systems. Finally, the lessons learned are summarized in the last section of this chapter.
Article
Today, incorporating advanced machine learning techniques into intrusion detection systems (IDSs) plays a crucial role in securing mobile edge computing systems. However, the mobility demands of our modern society require more advanced IDSs to make a good trade-off between coping with the rapid growth of traffic data and responding to attacks. Thus, in this paper, we propose a lightweight distributed IDS that exploits the advantages of centralized platforms to train and learn from large amounts of data. We investigate the benefits of two promising bio-inspired optimization algorithms, namely Ant Lion Optimization and Ant Colony Optimization, to find the optimal data representation for the classification process. We use Deep Forest as a classifier to detect intrusive actions more robustly and generate as few false positives as possible. The experiment results show that the proposed approach can enhance the reliability of lightweight intrusion detection systems in terms of accuracy and execution time.
Article
Security is the primary concern in any IoT application or network. Due to the rapid increase in the usage of IoT devices, data privacy has become one of the most challenging issues for researchers. In IoT applications such as health care, smart homes, and wearables, transmission of personal data is frequent. A Man-in-the-Middle attack is one in which an outsider eavesdrops on the communication between two trusted parties, steals important information such as passwords or personal identification numbers, and misuses it. This paper therefore proposes a regression modelling technique to detect and mitigate the attack and to provide an attack-free path from source to destination in an IoT network. Three machine learning techniques, Linear Regression (LR), Multivariate Linear Regression (MLR), and Gaussian Process Regression (GPR), are used; their performance is analyzed on various metrics, and Gaussian Process Regression is shown to provide a higher detection rate for attacks and a lower rate of attack misclassification.
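As one illustration of how a regression model can flag man-in-the-middle style tampering, the sketch below fits scikit-learn's Gaussian Process Regression on benign traffic (predicting latency from a few flow features) and flags observations whose residual exceeds a multiple of the predictive standard deviation. The features, threshold, and detection rule are assumptions for illustration, not the paper's exact method.

```python
# Illustrative GPR-based anomaly check: large residual vs. the model trained
# on benign traffic is treated as a possible interception. Not the paper's model.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(2)
X_benign = rng.random((200, 3))                       # e.g. packet size, hop count, load
latency = 10 + 5 * X_benign[:, 0] + rng.normal(0, 0.3, 200)

gpr = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gpr.fit(X_benign, latency)

x_new = rng.random((1, 3))
observed_latency = 25.0                               # suspiciously high for this flow
pred, std = gpr.predict(x_new, return_std=True)
is_attack = abs(observed_latency - pred[0]) > 3 * std[0]
print(f"predicted {pred[0]:.1f} ms, observed {observed_latency} ms, flagged: {is_attack}")
```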
Chapter
The life cycle of a network system usually includes four stages: demand investigation, planning and design, deployment and implementation, and operation and maintenance. Based on this cycle, a huge network architecture has now been formed, which has played an important role in promoting economic and social development. However, with the vigorous rise of technologies such as big data, cloud computing, Internet of Things, and mobile Internet, Internet applications are becoming increasingly diversified and business volume is increasing. Therefore, the current network architecture is gradually unable to meet the demand, and the existing problems are becoming increasingly prominent. In general, the core problem is that there is a contradiction between the diverse and changeable network upper-layer applications and business requirements and the current stable and rigid traditional network architecture. In order to meet a specific application requirement, it usually needs to include a large number of hardware devices. However, a noteworthy problem is that network devices produced by different manufacturers usually require different ways to debug and configure. Therefore, in a network that mixes equipment from multiple different vendors, managing and deploying the network is a very big challenge. Moreover, the inability to perform intelligent flow control and visualized network status supervision based on network conditions is also a problem that hinders further development. Based on the above problems, software-defined networking (SDN) is a better solution. In general, SDN has the following three advantages: (1) SDN can change the tightly coupled architecture of applications and networks under traditional networks and improve the level of network resource pooling; (2) SDN networks can realize automatic network deployment and configuration, and support rapid business launch and flexible expansion; (3) By introducing programmable features, automated network services and protocol scheduling can be realized. However, the architecture still has some challenges worth considering, such as: (1) Challenges faced by interface/protocol standardization. At present, the control architecture system of the SDN centralized control concept is not unified, and it is difficult to achieve mutual operation due to the different degrees of vendors’ support for the SDN standard. (2) Security challenges. The core controller of the SDN network may have security problems such as excessive load, single point failure, and vulnerability to network attacks. Therefore, it is necessary to establish a reasonable mechanism to ensure the safe and stable operation of the entire system. (3) Challenges in performance. The existing ASIC chip architecture is based on the traditional IP or Ethernet addressing and forwarding design. Therefore, whether the equipment under the SDN architecture can maintain the theoretical high performance remains to be discussed. To sum up, this chapter will start from the analysis and comparison of the traditional network architecture and the SDN network architecture, summarize the problems in the traditional architecture and the necessity of the development of the SDN architecture, and further analyze the application scenarios and the existence of the SDN architecture challenge.
Article
Full-text available
In today’s interconnected society, cyberattacks have become more frequent and sophisticated, and existing intrusion detection systems may not be adequate in the complex cyberthreat landscape. For instance, existing intrusion detection systems may have overfitting, low classification accuracy, and high false positive rate (FPR) when faced with significantly large volume and variety of network data. An intrusion detection approach based on improved deep belief network (DBN) is proposed in this paper to mitigate the above problems, where the dataset is processed by probabilistic mass function (PMF) encoding and Min-Max normalization method to simplify the data preprocessing. Furthermore, a combined sparsity penalty term based on Kullback-Leibler (KL) divergence and non-mean Gaussian distribution is introduced in the likelihood function of the unsupervised training phase of DBN, and sparse constraints retrieve the sparse distribution of the dataset, thus avoiding the problem of feature homogeneity and overfitting. Finally, simulation experiments are performed on the NSL-KDD and UNSW-NB15 public datasets. The proposed method achieves 96.17% and 86.49% accuracy, respectively. Experimental results show that compared with the state-of-the-art methods, the proposed method achieves significant improvement in classification accuracy and FPR.
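The sparsity penalty mentioned above can be illustrated with the standard KL-divergence term added to RBM/DBN pre-training: it pushes each hidden unit's mean activation toward a small target rho. The sketch below shows only this classical term; the paper's combined penalty with a non-mean Gaussian component is omitted, and the target and weight values are assumptions.

```python
# Classical KL-divergence sparsity penalty over hidden-unit activations.
# Target rho and the suggested weight are illustrative assumptions.
import numpy as np

def kl_sparsity_penalty(hidden_probs: np.ndarray, rho: float = 0.05) -> float:
    """hidden_probs: (n_samples, n_hidden) activation probabilities."""
    rho_hat = np.clip(hidden_probs.mean(axis=0), 1e-8, 1 - 1e-8)  # per-unit mean activation
    kl = rho * np.log(rho / rho_hat) + (1 - rho) * np.log((1 - rho) / (1 - rho_hat))
    return float(kl.sum())

acts = np.random.default_rng(0).uniform(0.0, 1.0, size=(256, 64))
print("sparsity penalty:", round(kl_sparsity_penalty(acts), 3))
# Scaled by a small weight (e.g. 0.1), this value would be added to the
# unsupervised training loss to discourage dense, homogeneous features.
```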
Article
Full-text available
The false alarm rate of online anomaly-based intrusion detection systems is a crucial concern. It is challenging to handle in real-world scenarios, where these anomalies occur sporadically. Existing intrusion detection systems have been developed to limit or decrease the false alarm rate. However, the state-of-the-art approaches are attack- or algorithm-specific, which is not generic. In this article, a soft-computing-based approach has been designed to reduce the false-positive rate for hierarchical data of anomaly-based intrusion detection systems. A recurrent neural network model is applied to classify the intrusion detection data set and normal instances into various subclasses. The designed approach is more practical because it does not require any assumption about, or knowledge of, the data set structure. Experimental evaluation is conducted on various attacks in the KDDCup'99 and NSL-KDD data sets. The proposed method enhances intrusion detection systems so that they can work with data with dependent and independent features. Furthermore, this approach is also beneficial for real-life scenarios with a low occurrence of attacks.
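A minimal sketch of the recurrent-classifier idea follows: each connection record is treated as a short sequence of feature groups and classified into attack subclasses with an LSTM. The feature count, the sequence split, and the number of subclasses are illustrative assumptions.

```python
# Toy PyTorch sketch of a recurrent classifier for IDS records. All sizes
# (feature groups, hidden width, subclass count) are illustrative assumptions.
import torch
import torch.nn as nn

class RNNClassifier(nn.Module):
    def __init__(self, n_features=8, hidden=32, n_classes=5):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):              # x: (batch, seq_len, n_features)
        _, (h, _) = self.lstm(x)
        return self.head(h[-1])        # logits over attack subclasses

records = torch.randn(16, 5, 8)        # 16 records, each viewed as 5 groups of 8 features
logits = RNNClassifier()(records)
print(logits.argmax(dim=1))            # predicted subclass per record
```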
Article
Full-text available
A centralized infrastructure system carries out existing data analytics and decision-making processes from our current highly virtualized platform of wireless networks and the Internet of Things (IoT) applications. There is a high possibility that these existing methods will encounter more challenges and issues in relation to network dynamics, resulting in a high overhead in the network response time, leading to latency and traffic. In order to avoid these problems in the network and achieve an optimum level of resource utilization, a new paradigm called edge computing (EC) is proposed to pave the way for the evolution of new age applications and services. With the integration of EC, the processing capabilities are pushed to the edge of network devices such as smart phones, sensor nodes, wearables and on-board units, where data analytics and knowledge generation are performed, which removes the necessity for a centralized system. Many IoT applications, such as smart cities, the smart grid, smart traffic lights and smart vehicles, are rapidly upgrading their applications with EC, significantly improving response time as well as conserving network resources. Irrespective of the fact that EC shifts the workload from a centralized cloud to the edge, the analogy between EC and the cloud pertaining to factors such as resource management and computation optimization are still open to research studies. Hence, this paper aims to validate the efficiency and resourcefulness of EC. We extensively survey the edge systems and present a comparative study of cloud computing systems. After analyzing the different network properties in the system, the results show that EC systems perform better than cloud computing systems. Finally, the research challenges in implementing an edge computing system and future research directions are discussed.
Conference Paper
Full-text available
Understanding visual input as perceived by humans is a challenging task for machines. Today, most successful methods work by learning features from static images. Based on classical artificial neural networks, those methods are not adapted to process event streams as provided by the Dynamic Vision Sensor (DVS). Recently, an unsupervised learning rule to train Spiking Restricted Boltzmann Machines has been presented [9]. Relying on synaptic plasticity, it can learn features directly from event streams. In this paper, we extend this method by adding convolutions, lateral inhibitions and multiple layers. We evaluate our method on a self-recorded DVS dataset as well as the Poker-DVS dataset. Our results show that our convolutional method performs better and needs less parameters. It also achieves comparable results to previous event-based classification methods while learning features in an unsupervised fashion.
Article
Full-text available
This paper addresses the mapping problem. Using a conjugate prior form, we derive the exact theoretical batch multiobject posterior density of the map given a set of measurements. The landmarks in the map are modeled as extended objects, and the measurements are described as a Poisson process, conditioned on the map. We use a Poisson process prior on the map and prove that the posterior distribution is a hybrid Poisson, multi-Bernoulli mixture distribution. We devise a Gibbs sampling algorithm to sample from the batch multi-object posterior. The proposed method can handle uncertainties in the data associations and the cardinality of the set of landmarks, and is parallelizable, making it suitable for large-scale problems. The performance of the proposed method is evaluated on synthetic data and is shown to outperform a state-of-the-art method.
Article
Full-text available
Dew computing is an emerging new research area and has great potentials in applications. In this paper, we propose a revised definition of dew computing. The new definition is: Dew computing is an on-premises computer software-hardware organization paradigm in the cloud computing environment where the on-premises computer provides functionality that is independent of cloud services and is also collaborative with cloud services. The goal of dew computing is to fully realize the potentials of on-premises computers and cloud services. This definition emphasizes two key features of dew computing: independence and collaboration. Furthermore, we propose a group of dew computing categories. These categories may inspire new applications. The full text of this article can be obtained in the following URL: https://www.ronpub.com/publications/ojcc/OJCC_2016v3i1n02_YingweiWang.html
Article
Full-text available
Cloud, fog and dew computing concepts offer elastic resources that can serve scalable services. These resources can be scaled horizontally or vertically. The former is more powerful; it increases the number of identical machines (scaling out) to retain the performance of the service. However, this scaling is tightly connected with the existence of a balancer in front of the scaled resources that balances the load among the end points. In this paper, we present a successful implementation of a scalable low-level load balancer, implemented on the network layer. The scalability is tested by a series of experiments for small-scale servers providing services in the range of dew computing services. The experiments showed that the balancer adds a small latency of several milliseconds and thus slightly reduces performance when the distributed system is underutilized. However, the results show that the balancer achieves even a super-linear speedup (a speedup greater than the number of scaled resources) for a greater load. The paper also discusses many other benefits that the balancer provides.
Article
Full-text available
The paper considers a conceptual approach for the organization of vertical hierarchical links between the scalable distributed computing paradigms: Cloud Computing, Fog Computing and Dew Computing. In this paper, Dew Computing is described and recognized as a new structural layer in the existing distributed computing hierarchy, positioned as the ground level for the Cloud and Fog computing paradigms. Vertical, complementary, hierarchical division from Cloud to Dew Computing satisfies the needs of high- and low-end computing demands in everyday life and work. These new computing paradigms lower the cost and improve the performance, particularly for concepts and applications such as the Internet of Things (IoT) and the Internet of Everything (IoE). In addition, the Dew computing paradigm will require new programming models that efficiently reduce the complexity and improve the productivity and usability of scalable distributed computing, following the principles of High-Productivity computing.
Conference Paper
Full-text available
One of the major research challenges in this field is the unavailability of a comprehensive network-based data set which can reflect modern network traffic scenarios, a vast variety of low-footprint intrusions and depth-structured information about the network traffic. For evaluating network intrusion detection research efforts, the KDD98, KDDCUP99 and NSLKDD benchmark data sets were generated a decade ago. However, numerous current studies show that, for the current network threat environment, these data sets do not comprehensively reflect network traffic and modern low-footprint attacks. To counter the unavailability of a network benchmark data set, this paper examines the creation of the UNSW-NB15 data set. This data set contains a hybrid of real modern normal activities and contemporary synthesized attack activities of network traffic. Existing and novel methods are utilised to generate the features of the UNSW-NB15 data set. This data set is available for research purposes and can be accessed from the links: 1. http://ieeexplore.ieee.org/xpl/articleDetails.jsp?arnumber=7348942&filter%3DAND%28p_IS_Number%3A7348936%29 2. https://www.unsw.adfa.edu.au/australian-centre-for-cyber-security/cybersecurity/ADFA-NB15-Datasets/
Article
Full-text available
Restricted Boltzmann Machines (RBMs) are general unsupervised learning devices to ascertain generative models of data distributions. RBMs are often trained using the Contrastive Divergence learning algorithm (CD), an approximation to the gradient of the data log-likelihood. A simple reconstruction error is often used as a stopping criterion for CD, although several authors have raised doubts concerning the feasibility of this procedure. In many cases the evolution curve of the reconstruction error is monotonic while the log-likelihood is not, thus indicating that the former is not a good estimator of the optimal stopping point for learning. However, not many alternatives to the reconstruction error have been discussed in the literature. In this manuscript we investigate simple alternatives to the reconstruction error, based on the inclusion of information contained in neighboring states to the training set, as a stopping criterion for CD learning.
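The point about reconstruction error being a questionable stopping criterion can be made concrete by tracking it alongside scikit-learn's pseudo-log-likelihood while training a BernoulliRBM with contrastive divergence, as in the sketch below (the toy binary data and hyper-parameters are assumptions).

```python
# Monitor reconstruction error vs. pseudo-log-likelihood during CD training.
# Toy data and hyper-parameters are illustrative assumptions.
import numpy as np
from sklearn.neural_network import BernoulliRBM

rng = np.random.default_rng(0)
X = (rng.random((500, 64)) > 0.7).astype(float)      # toy binary data

rbm = BernoulliRBM(n_components=32, learning_rate=0.05, batch_size=32, random_state=0)
for epoch in range(10):
    rbm.partial_fit(X)
    recon = rbm.gibbs(X)                             # one Gibbs-step reconstruction
    recon_err = np.mean((X - recon) ** 2)
    pseudo_ll = rbm.score_samples(X).mean()          # proxy for the log-likelihood
    print(f"epoch {epoch:2d}  recon_err={recon_err:.4f}  pseudo_ll={pseudo_ll:.2f}")
# A sensible stopping rule watches pseudo_ll (or a validation-based criterion),
# not only the often monotone-looking reconstruction error.
```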
Conference Paper
Full-text available
Network intrusions are becoming more and more sophisticated to detect. To mitigate this issue, intrusion detection systems (IDSs) have been widely deployed in identifying a variety of attacks and collaborative intrusion detection networks (CIDNs) have been proposed which enables an IDS to collect information and learn experience from other IDSs with the purpose of improving detection accuracy. A CIDN is expected to have more power in detecting attacks such as denial-of-service (DoS) than a single IDS. In real deployment, we notice that each IDS has different levels of sensitivity in detecting different types of intrusions (i.e., based on their own signatures and settings). In this paper, we propose a machine learning-based approach to assign intrusion sensitivity based on expert knowledge and design a trust management model that allows each IDS to evaluate the trustworthiness of others by considering their detection sensitivities. In the evaluation, we explore the performance of our proposed approach under different attack scenarios. The experimental results indicate that by considering the intrusion sensitivity, our trust model can enhance the detection accuracy of malicious nodes as compared to existing similar models.
Conference Paper
Full-text available
Intrusion Detection Systems (IDSs) have become increasingly popular over the past years as an important network security technology to detect cyber attacks in a wide variety of network communications. An IDS monitors network or host system activities by collecting network information and analyzing this information for malicious activities. Cloud computing, with the concept of Software as a Service (SaaS), presents an exciting benefit as it enables providers to rent their services to users to perform complex tasks over the Internet. In addition, Cloud-based services reduce the cost of investing in new infrastructure, training new personnel, or licensing new software. In this paper, we introduce a novel framework based on Cloud computing called Cloud-based Intrusion Detection Service (CBIDS). This model enables the identification of malicious activities from different points of the network and overcomes the deficiencies of classical intrusion detection. CBIDS can be implemented to detect a variety of attacks in private and public Clouds.
Conference Paper
Full-text available
The accuracy of detecting intrusions within an intrusion detection network (IDN) depends on the efficiency of collaboration between the peer intrusion detection systems (IDSes) as well as the security itself of the IDN against insider threats. In this paper, we study host-based IDNs and introduce a Dirichlet-based model to measure the level of trustworthiness among peer IDSes according to their mutual experience. The model has strong scalability properties and is robust against common insider threats, such as a compromised or malfunctioning peer. We evaluate our system based on a simulated collaborative host-based IDS network. The experimental results demonstrate the improved robustness, efficiency, and scalability of our system in detecting intrusions in comparison with existing models.
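A bare-bones version of Dirichlet-based trust scoring is sketched below: each peer keeps per-level satisfaction counts about another peer and scores it by the expected satisfaction under the posterior Dirichlet distribution. The three satisfaction levels, the uniform prior, and the example counts are assumptions, not the paper's parameterisation.

```python
# Dirichlet trust sketch: per-level feedback counts -> expected satisfaction.
# Levels, prior, and example counts are illustrative assumptions.
import numpy as np

satisfaction_levels = np.array([0.0, 0.5, 1.0])     # bad / neutral / good feedback
prior = np.ones(3)                                  # uniform Dirichlet prior

def trust_score(counts: np.ndarray) -> float:
    posterior = prior + counts
    expected = posterior / posterior.sum()          # E[p] under the Dirichlet posterior
    return float(expected @ satisfaction_levels)

honest_peer = np.array([1, 3, 16])                  # mostly "good" feedback
flaky_peer = np.array([9, 8, 3])
print(round(trust_score(honest_peer), 3))           # ~0.83
print(round(trust_score(flaky_peer), 3))            # ~0.37
```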
Chapter
For better classification, generative models are used to initialize the model and extract features before training a classifier. Typically, separate unsupervised and supervised learning problems are solved. Generative restricted Boltzmann machines and deep belief networks are widely used for unsupervised learning. We developed several supervised models based on deep belief networks in order to improve this two-phase strategy. Modifying the loss function to account for expectation with respect to the underlying generative model, introducing weight bounds, and multi-level programming are all applied in model development. The proposed models capture both unsupervised and supervised objectives effectively. The computational study verifies that our models perform better than the two-phase training approach. In addition, we conduct an ablation study to examine how a different part of our model and a different mix of training samples affect the performance of our models.
Article
With advances in Fog and edge computing, various problems such as data processing for large Internet of Things (IoT) systems can be solved in an efficient manner. One such problem for the next-generation smart grid IoT system, comprising millions of smart devices, is the data aggregation problem. Traditional data aggregation schemes for smart grids incur high computation and communication costs, and in recent years there have been efforts to leverage fog computing with smart grids to overcome these limitations. In this paper, a new fog-enabled privacy-preserving data aggregation scheme (FESDA) is proposed. Unlike existing schemes, the proposed scheme is resilient to false data injection attacks by filtering out the inserted values from external attackers. To achieve privacy, a modified version of the Paillier cryptosystem is used to encrypt the consumption data of the smart meter users. In addition, FESDA is fault-tolerant, which means the collection of data from other devices will not be affected even if some of the smart meters malfunction. We evaluate its performance along with three other competing schemes in terms of aggregation, decryption and communication costs. The findings demonstrate that FESDA reduces the communication cost by 50% when compared with the PPFA aggregation scheme.
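The additively homomorphic core of FESDA-style aggregation can be illustrated with the Paillier cryptosystem via the `phe` package: the aggregator sums ciphertexts without learning any individual meter reading, and a simple plausibility bound stands in for the scheme's filtering of injected false data. The key size, readings, and bound are illustrative assumptions.

```python
# Additively homomorphic aggregation with Paillier (python-paillier / `phe`).
# Readings and the plausibility bound are illustrative assumptions.
from phe import paillier

public_key, private_key = paillier.generate_paillier_keypair(n_length=1024)

readings = [3.2, 1.7, 0.0, 5.4]                     # kWh from four smart meters
ciphertexts = [public_key.encrypt(r) for r in readings if 0.0 <= r <= 50.0]
# the range check above stands in for FESDA's filtering of injected false data

encrypted_total = sum(ciphertexts[1:], ciphertexts[0])   # homomorphic addition
print("aggregate consumption:", private_key.decrypt(encrypted_total))  # ~10.3
```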
Conference Paper
As adversarial techniques constantly evolve to circumvent existing security measures, an isolated, stand-alone intrusion detection system (IDS) is unlikely to be efficient or effective. Hence, there has been a trend towards developing collaborative intrusion detection networks (CIDNs), where IDS nodes collaborate and communicate with each other. Such a distributed ecosystem can achieve improved detection accuracy, particularly for detecting emerging threats in a timely fashion (before the threat becomes common knowledge). However, there are inherent limitations due to malicious insiders who can seek to compromise and poison the ecosystem. A potential mitigation strategy is to introduce a challenge-based trust mechanism, in order to identify and penalize misbehaving nodes by evaluating the satisfaction between challenges and responses. While this mechanism has been shown to be robust against common insider attacks, it may still be vulnerable to advanced insider attacks in a real-world deployment. Therefore, in this paper, we develop a collusion attack, hereafter referred to as Bayesian Poisoning Attack, which enables a malicious node to model received messages and to craft a malicious response to those messages whose aggregated appearance probability of normal requests is above the defined threshold. In the evaluation, we explore the attack performance under both simulated and real network environments. Experimental results demonstrate that the malicious nodes under our attack can successfully craft and send untruthful feedback while maintaining their trust values.
Article
To protect assets and resources from being hacked, intrusion detection systems are widely implemented in organizations around the world. However, false alarms are one challenging issue for such systems, as they significantly degrade the effectiveness of detection and greatly increase the burden of analysis. To solve this problem, building an intelligent false alarm filter using machine learning classifiers is considered a promising solution, where an appropriate algorithm can be selected in an adaptive way in order to maintain the filtration accuracy. By means of cloud computing, the task of adaptive algorithm selection can be offloaded to the cloud, but this can cause communication delay and additional burden. In this work, motivated by the advent of edge computing, we propose a framework to improve intelligent false alarm reduction for distributed intrusion detection systems (DIDSs) based on edge computing devices. Our framework can provide energy efficiency, as the data can be processed at the edge for a shorter response time. The evaluation results demonstrate that our framework can help reduce the workload for the central server and the delay as compared to similar studies.
Article
Intrusion alert analysis is an attractive and active topic in the area of intrusion detection systems. In recent decades, many research communities have been working in this field. The main objective of this article is to achieve a taxonomy of research fields in intrusion alert analysis by using a systematic mapping study of 468 high-quality papers. The results show that there are 10 different research topics in the field, which can be classified into three broad groups: pre-processing, processing, and post-processing. The processing group contains most of the research works, and the post-processing group is newer than others.
Conference Paper
To construct an intelligent alarm filter is a promising solution to help reduce false alarms for an intrusion detection system (IDS), in which an appropriate algorithm can be selected in an adaptive way. Taking the advantage of cloud computing, the process of algorithm selection can be offloaded to the cloud, but it may cause communication delay and additional burden on the cloud side. This issue may become worse when it comes to distributed intrusion detection systems (DIDSs), i.e., some IoT applications might require very short response time and most of the end nodes in IoT are energy constrained things. In this paper, with the advent of edge computing, we propose a framework for improving the intelligent false alarm reduction for DIDSs based on edge computing devices (i.e., the data can be processed at the edge for shorter response time and could be more energy efficient). The evaluation shows that the proposed framework can help reduce the workload for the central server and shorten the delay as compared to the similar studies.
Article
The Internet of Things (IoT) now permeates our daily lives, providing important measurement and collection tools to inform our every decision. Millions of sensors and devices are continuously producing data and exchanging important messages via complex networks supporting machine-to-machine communications and monitoring and controlling critical smart-world infrastructures. As a strategy to mitigate the escalation in resource congestion, edge computing has emerged as a new paradigm to solve IoT and localized computing needs. Compared with the well-known cloud computing, edge computing will migrate data computation or storage to the network “edge”, near the end users. Thus, a number of computation nodes distributed across the network can offload the computational stress away from the centralized data center, and can significantly reduce the latency in message exchange. In addition, the distributed structure can balance network traffic and avoid the traffic peaks in IoT networks, reducing the transmission latency between edge/cloudlet servers and end users, as well as reducing response times for real-time IoT applications in comparison with traditional cloud services. Furthermore, by transferring computation and communication overhead from nodes with limited battery supply to nodes with significant power resources, the system can extend the lifetime of the individual nodes. In this paper, we conduct a comprehensive survey, analyzing how edge computing improves the performance of IoT networks. We categorize edge computing into different groups based on architecture, and study their performance by comparing network latency, bandwidth occupation, energy consumption, and overhead. In addition, we consider security issues in edge computing, evaluating the availability, integrity, and confidentiality of security strategies of each group, and propose a framework for security evaluation of IoT networks with edge computing. Finally, we compare the performance of various IoT applications (smart city, smart grid, smart transportation, etc.) in edge computing and traditional cloud computing architectures.
Article
With the increasing digitization of the healthcare industry, a wide range of devices (including traditionally non-networked medical devices) are Internet- and inter-connected. Mobile devices (e.g. smartphones) are one common device used in the healthcare industry to improve the quality of service and experience for both patients and healthcare workers, and the underlying network architecture to support such devices is also referred to as medical smartphone networks (MSNs). MSNs, similar to other networks, are subject to a wide range of attacks (e.g. leakage of sensitive patient information by a malicious insider). In this work, we focus on MSNs and present a compact but efficient trust-based approach using Bayesian inference to identify malicious nodes in such an environment. We then demonstrate the effectiveness of our approach in detecting malicious nodes by evaluating the deployment of our proposed approach in a real-world environment with two healthcare organizations.
Article
In cloud computing environments, resources stored on the cloud servers are transmitted in the form of data flow to the clients via networks. Due to the real-time and ubiquitous requirements of cloud computing services, how to design a sophisticated transmission model to ensure service reliability and security becomes a key problem. In this paper, we first propose a Comprehensive Transmission (CT) model, by combining the Client/Server (C/S) mode and the Peer-to-Peer (P2P) mode for reliable data transmission. Then, we design a Two-Phase Resource Sharing (TPRS) protocol, which mainly consists of a pre-filtering phase and a verification phase, to efficiently and privately achieve authorized resource sharing in the CT model. Extensive experiments have been conducted on the synthetic data set to verify the feasibility of our protocol.
Conference Paper
To enhance the performance of single intrusion detection systems (IDSs), collaborative intrusion detection networks (CIDNs) have been developed, which enable a set of IDS nodes to communicate with each other. In such a distributed network, insider attacks like collusion attacks are the main threat. In the literature, challenge-based trust mechanisms have been established to identify malicious nodes by evaluating the satisfaction between challenges and responses. However, we find that such mechanisms rely on two major assumptions, which may result in a weak threat model and make CIDNs still vulnerable to advanced insider attacks in practical deployment. In this paper, we design a novel type of collusion attack, called passive message fingerprint attack (PMFA), which can collect messages and identify normal requests in a passive way. In the evaluation, we explore the attack performance under both simulated and real network environments. Experimental results indicate that under our attack, malicious nodes can send malicious responses to normal requests while maintaining their trust values.
Conference Paper
Intrusion detection systems (IDSs) are widely deployed in organizations nowadays as the last line of defense for network security. However, one of the big problems of these systems is that a large number of alarms, especially false alarms, are produced during the detection process, which greatly aggravates the analysis workload and reduces the effectiveness of detection. To mitigate this problem, we advocate that constructing a false alarm filter by utilizing machine learning schemes is an effective solution. In this paper, we propose an adaptive false alarm filter aiming to filter out false alarms with the best machine learning algorithm based on distinct network contexts. In particular, we first compare six specific machine learning schemes to illustrate their unstable performance. Then, we demonstrate the architecture of our adaptive false alarm filter. The evaluation results show that our approach is effective and encouraging in real scenarios.
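The adaptive-selection step described above can be sketched as a small benchmarking loop: several candidate classifiers are cross-validated on a labelled sample of alarms from the current network context and the best one is deployed as the false-alarm filter. The candidate set, scoring metric, and synthetic data are assumptions rather than the paper's configuration.

```python
# Adaptive selection sketch: cross-validate candidate classifiers on the
# current context and deploy the best as the false-alarm filter.
# Candidates, scoring, and data are illustrative assumptions.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(1)
X = rng.random((1000, 20))                 # alarm features for the current context
y = rng.integers(0, 2, size=1000)          # 1 = true alarm, 0 = false alarm

candidates = {
    "decision_tree": DecisionTreeClassifier(random_state=0),
    "naive_bayes": GaussianNB(),
    "knn": KNeighborsClassifier(n_neighbors=5),
    "svm": SVC(),
}
scores = {name: cross_val_score(m, X, y, cv=5).mean() for name, m in candidates.items()}
best = max(scores, key=scores.get)
print(scores)
print("selected filter:", best)
candidates[best].fit(X, y)                 # deploy the selected algorithm
```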
Conference Paper
Collaborative intrusion detection systems (IDSs) have great potential for addressing the challenges posed by the increasing aggressiveness of current Internet attacks. However, one of the major concerns with the proposed collaborative IDSs is their vulnerability to the insider threat. Malicious intruders infiltrating such a system could poison the collaborative detectors with false alarms, disrupting the intrusion detection functionality and placing the whole system at risk. In this paper, we propose a P2P-based overlay for intrusion detection (overlay IDS) that addresses the insider threat by means of a trust-aware engine for correlating alerts and an adaptive scheme for managing trust. We have implemented our system using the JXTA framework and have evaluated its effectiveness in preventing the spread of a real Internet worm over an emulated network. The evaluation results show that our overlay IDS significantly increases the overall survival rate of the network.