Fig 3 - uploaded by Shabib Aftab
Comparison of the FAR Results

Source publication
Article
Full-text available
Network security is an essential element in the day-to-day IT operations of nearly every organization in business. Securing a computer network means considering the threats and vulnerabilities and arranging the countermeasures. Network security threats are increasing rapidly, making wireless networks and internet services unreliable and insecure. In...

Similar publications

Preprint
Full-text available
Intrusion detection is a crucial technology in the field of communication network security. In this paper, a dynamic evolutionary sparse neural network (DESNN) is proposed for intrusion detection, referred to as the DESNN algorithm. First, an ensemble neural network model is constructed, which is processed by a dynamic pruning rule and further divided into ad...

Citations

... Ahmad Iqbal et al. [22] implemented feed-forward and pattern recognition neural networks for IDS. They used scaled conjugate gradient methods and Bayesian regularisation. ...
Preprint
Full-text available
IDS aims to protect computer networks from security threats by detecting, notifying, and taking appropriate action to prevent illegal access and protect confidential information. As the globe becomes increasingly dependent on technology and automated processes, ensuring secured systems, applications, and networks has become one of the most significant problems of this era. The global web and digital technology have significantly accelerated the evolution of the modern world, necessitating the use of telecommunications and data transfer platforms. Researchers are enhancing the effectiveness of IDS by incorporating popular datasets into machine learning algorithms. IDS, equipped with machine learning classifiers, enhances security attack detection accuracy by identifying normal or abnormal network traffic. This paper explores the methods of capturing and reviewing intrusion detection systems (IDS) and evaluates the challenges existing datasets face. A deluge of research on machine learning (ML) and deep learning (DL) architecture-based intrusion detection techniques has been conducted in the past ten years on various cybersecurity datasets, including KDDCUP'99, NSL-KDD, UNSW-NB15, CICIDS-2017, and CSE-CIC-IDS2018. We conducted a literature review and presented an in-depth analysis of various intrusion detection methods that use SVM, KNN, DT, LR, NB, RF, XGBOOST, Adaboost, and ANN. We provide an overview of each technique, explaining the role of the classifiers and algorithms used. A detailed tabular analysis highlights the datasets used, classifiers employed, attacks detected, evaluation metrics, and conclusions drawn. This article offers a thorough review for future IDS research.
... In feedback neural networks, unlike feed-forward neural networks, the output of a cell is not passed only to the next layer; it can also be connected as input to any cell in a previous layer or in its own layer [22,23]. ...
Article
Full-text available
IT is recognized as the engine of the digital world. The fact that this technology has multiple sub-sectors makes it the driving force of the economy. With these characteristics, the sector is becoming the center of attention for investors. Considering that investors prioritize profitability, making accurate and reliable profitability forecasts becomes a top priority for managers. The aim of this study is to estimate the profitability of IT sector firms traded on Borsa Istanbul using machine learning methods. The financial data of 13 technology firms listed in the Borsa Istanbul Technology index and operating between March 2000 and December 2023 were used. Return on assets (ROA) and return on equity (ROE) were estimated using machine learning methods such as neural networks, multiple linear regression and decision tree regression. The results reveal that artificial neural networks (ANN) and multiple linear regression (MLR) are particularly effective.
... This method uses back-propagation as an error correction algorithm based on the partial derivative of the output error function E[·] to update the weights and the thresholds of each of the previous layers in the cascade. MLF is used for OD in the field of network intrusion detection [114], anomaly detection in IoT networks [115], and in multi-sensor systems [116]. A drawback of MLF in performing OD on streaming data is the fact that it requires labeled data for the model's initial training before OD. ...
... MLF Network intrusion detection [114], intelligent transport systems [168], medicine, and customization of IoT and WSNs [115,116,169]. ...
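The back-propagation update described in these excerpts can be sketched as a single delta-rule step for one sigmoid neuron, assuming the usual squared-error function E = ½(t − y)²; the single-neuron setup, weights, and learning rate below are illustrative assumptions, not the cited papers' implementations:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def backprop_step(weights, bias, inputs, target, lr=0.5):
    """One delta-rule update for a single sigmoid neuron,
    minimizing E = 0.5 * (target - output)^2."""
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    out = sigmoid(z)
    # dE/dz = (out - target) * out * (1 - out)  (chain rule with sigmoid derivative)
    delta = (out - target) * out * (1.0 - out)
    new_weights = [w - lr * delta * x for w, x in zip(weights, inputs)]
    new_bias = bias - lr * delta
    return new_weights, new_bias, out
```

Repeating the step drives the output toward the target; multi-layer back-propagation applies the same chain rule layer by layer back through the cascade, which is what lets it update the weights and thresholds of each previous layer.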
Article
Full-text available
Streaming data are present all around us. From traditional radio systems streaming audio to today’s connected end-user devices constantly sending information or accessing services, data are flowing constantly between nodes across various networks. The demand for appropriate outlier detection (OD) methods in the fields of fault detection, special events detection, and malicious activities detection and prevention is not only persistent over time but increasing, especially with the recent developments in Telecommunication systems such as Fifth Generation (5G) networks facilitating the expansion of the Internet of Things (IoT). The process of selecting a computationally efficient OD method, adapted for a specific field and accounting for the existence of empirical data, or lack thereof, is non-trivial. This paper presents a thorough survey of OD methods, categorized by the applications they are implemented in, the basic assumptions that they use according to the characteristics of the streaming data, and a summary of the emerging challenges, such as the evolving structure and nature of the data and their dimensionality and temporality. A categorization of commonly used datasets in the context of streaming data is produced to aid data source identification for researchers in this field. Based on this, guidelines for OD method selection are defined, which consider flexibility and sample size requirements and facilitate the design of such algorithms in Telecommunications and other industries.
... Ahmad Iqbal et al. [22] implemented feed-forward and pattern recognition neural networks for IDS. They used scaled conjugate gradient methods and Bayesian regularisation. ...
Article
Full-text available
IDS aims to protect computer networks from security threats by detecting, notifying, and taking appropriate action to prevent illegal access and protect confidential information. As the globe becomes increasingly dependent on technology and automated processes, ensuring secure systems, applications, and networks has become one of the most significant problems of this era. The global web and digital technology have significantly accelerated the evolution of the modern world, necessitating the use of telecommunications and data transfer platforms. Researchers are enhancing the effectiveness of IDS by incorporating popular datasets into machine learning algorithms. IDS, equipped with machine learning classifiers, enhances security attack detection accuracy by identifying normal or abnormal network traffic. This paper explores the methods of capturing and reviewing intrusion detection systems (IDS) and evaluates the challenges existing datasets face. A deluge of research on machine learning (ML) and deep learning (DL) architecture-based intrusion detection techniques has been conducted in the past ten years on various cybersecurity datasets, including KDDCUP'99, NSL-KDD, UNSW-NB15, CICIDS-2017, and CSE-CIC-IDS2018. We conducted a literature review and presented an in-depth analysis of various intrusion detection methods that use SVM, KNN, DT, LR, NB, RF, XGBOOST, Adaboost, and ANN. We give an overview of each technique, explaining the role of the classifiers and all other algorithms used in the research. Additionally, a comprehensive analysis of each method is provided in tabular form, highlighting the dataset utilized, classifiers employed, attacks detected, evaluation metrics, and conclusions drawn from every technique investigated. This article provides a comprehensive overview of recent research on developing a reliable IDS using five distinct datasets, and its findings were carefully analyzed and contrasted with those of numerous prior investigations.
... ML techniques are employed in various ways, including supervised learning. In binary-class supervised NIDS, the classifier is trained to learn and predict whether the flow is benign or malicious [16], [18], [20]. In contrast, in multi-class supervised IDS, the classifier is trained to predict the type of attack [21], [22], [23], [24]. ...
Article
Current flow-based Network Intrusion Detection Systems (NIDSs) have the drawback of detecting attacks only once the flow has ended, resulting in potential delays in attack detection and increasing the risk of damage due to the infiltration of a greater number of malicious packets. Moreover, the delay provides attackers with an extended period of presence within the network, enabling them to execute subsequent attacks. To overcome this drawback, this work addresses the issue of early flow classification in NIDSs that incorporates a Deep Learning (DL) model. This model leverages Recurrent Neural Networks (RNNs), including Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU), coupled with attention mechanisms. This strategic combination allows the system to harness the inherent sequential nature of packets within network flows, enhancing the efficiency of early flow classification. We conducted experiments on two up-to-date network intrusion datasets, namely CIC-IDS2017 and 5G-NIDD. Our findings demonstrate the effectiveness and accuracy of the proposed NIDS in classifying network flows. Additionally, our approach showcases its efficacy by promptly identifying and detecting attacks in their early stages without the need for flow termination. This results in a reduction in both the number of initial packets required for classification and the time needed for detection.
... ANN easily learns from input data (samples) and has the power to generalize patterns (Montesinos López et al. 2022). Feedforward neural networks are well suited to pattern recognition, which makes them suitable for remote sensing data analysis (Iqbal & Aftab 2019; Topouzelis et al. 2009). Compared to other models, neural networks are less sensitive to noisy sample data (Sidike et al. 2018; Iqbal & Aftab 2019; Waqas et al. 2023). A generalized ANN consists of one input layer, one hidden layer and one output layer. ...
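The one-input-layer, one-hidden-layer, one-output-layer ANN described in the excerpt amounts to a plain forward pass; the layer sizes, sigmoid activation, and weight values below are arbitrary illustrative assumptions:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def forward(inputs, hidden_w, hidden_b, out_w, out_b):
    """Forward pass through a generalized feedforward ANN:
    input layer -> one sigmoid hidden layer -> one sigmoid output."""
    hidden = [sigmoid(sum(w * x for w, x in zip(ws, inputs)) + b)
              for ws, b in zip(hidden_w, hidden_b)]
    return sigmoid(sum(w * h for w, h in zip(out_w, hidden)) + out_b)

# Two inputs, two hidden neurons, one output (made-up weights).
y = forward([0.6, 0.3],
            hidden_w=[[0.5, -0.3], [0.8, 0.2]], hidden_b=[0.1, -0.1],
            out_w=[0.7, -0.4], out_b=0.05)
```

With sigmoid outputs, y always lies in (0, 1), which is why such networks are convenient for pattern recognition tasks where the output is read as a class score.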
Article
Full-text available
Accurate land use and land cover (LULC) mapping is crucial for sustainable urban planning and for much scientific research. As the demand for accurate LULC maps increases, classification algorithms must be compared to choose the best one. Although machine and deep learning algorithms are widely used across the world, their application is limited in Bangladesh. Accurate urban LULC mapping is challenging because urban heterogeneity affects the feature extraction of image classification models. In this research, the accuracy of the machine learning algorithms (MLA) RF (Random Forest) and SVM (Support Vector Machine), the deep learning algorithm (DLA) ANN (Artificial Neural Network) and the traditional Maximum Likelihood (MaxL) method was compared for LULC classification of Dhaka city. Model accuracy of the MLA and DLA was tested with statistical indices such as sensitivity, specificity, precision, recall and F1. A high correlation between the SVM and ANN models was found. The overall accuracy of the maps was 0.93, 0.94, 0.91 and 0.95 and kappa was 0.89, 0.91, 0.86 and 0.93 for the MaxL, RF, SVM and ANN models respectively. The user accuracy and producer accuracy varied largely according to LULC classes in the applied models. The lowest accuracy of the models was found for bare land classification, followed by built-up and vegetation. The high mixture of LULC classes affects the accuracy of built-up and bare land classification, which produces the lowest accuracy in the MaxL model. The findings indicate that the most accurate and reliable model for urban LULC classification was the ANN model.
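For reference, the overall accuracy and kappa values quoted above are standard functions of a classification confusion matrix; a minimal sketch (the example matrix is made up, not the study's data):

```python
def accuracy_and_kappa(cm):
    """Overall accuracy and Cohen's kappa from a square confusion
    matrix (rows = reference classes, columns = predicted classes)."""
    n = sum(sum(row) for row in cm)
    po = sum(cm[i][i] for i in range(len(cm))) / n    # observed agreement
    pe = sum(sum(row) * sum(col)                      # chance agreement
             for row, col in zip(cm, zip(*cm))) / (n * n)
    return po, (po - pe) / (1 - pe)

acc, kappa = accuracy_and_kappa([[4, 1], [1, 4]])  # acc 0.8, kappa 0.6
```

Kappa discounts the agreement expected by chance, which is why it is lower than overall accuracy for each of the four models reported.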
... By analyzing patterns, relationships, and factors that have contributed to past defects, these techniques can identify areas of code that are more likely to develop defects in future projects. Researchers have proposed some machine learning techniques [5][6][7][8][9][10][11][12][13][85][86][87][88][89] to improve the software engineering process. This shift towards predictive defect management is expected to not only improve software quality but also optimize resource allocation through more rigorous testing and review of defect-prone parts [92,93]. ...
Article
Full-text available
Software defect prediction is a crucial area of study focused on enhancing software quality and cutting down on software upkeep expenses. Cross Project Defect Prediction (CPDP) is a method meant to use information from different source projects to spot software issues in a specific project. CPDP comes in handy when the project being analyzed lacks enough, or any, defect data for creating a dependable defect prediction model. Machine learning, a part of artificial intelligence, learns from data and then makes forecasts or choices. Machine learning (ML) is a key component of CPDP because it can learn from heterogeneous and imbalanced data sources. However, there are many challenges and open issues in applying machine learning to CPDP, such as data selection, feature extraction, model selection, evaluation metrics, and transfer learning. In this study, we provide a complete review of existing literature from 2018 to 2023 on defect prediction using machine learning, covering the main methods, applications, and limitations. We also use ML to identify current research gaps and future directions for CPDP. This paper will serve as a useful reference for researchers interested in using ML for CPDP.
... Iqbal and Aftab (2019) use Feed-Forward and Pattern Recognition Neural Network algorithms to develop a NIDS solution. The researchers designed, implemented, and tested the developed system using the KDD CUP99 dataset with slight modifications to enhance experimental specificity [13]. They used Scaled Conjugate Gradient and Bayesian Regularization training functions to train the feedforward artificial neural networks. ...
... They used Scaled Conjugate Gradient and Bayesian Regularization training functions to train the feedforward artificial neural networks. The performance metrics used to evaluate the neural network models are accuracy, Matthews Correlation Coefficient, R-squared, Mean Square Error, False Alarm Rate (FAR), Detection Rate (DR), and Area Under the Receiver Operating Characteristic (ROC) curve [13]. These models demonstrate substantial performance measures when subjected to different attack vectors. ...
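Under the usual definitions, the FAR and DR cited in these excerpts come straight from the binary confusion matrix: FAR = FP/(FP + TN) and DR = TP/(TP + FN). A minimal sketch with made-up counts:

```python
def far_dr(tp, fp, tn, fn):
    """False Alarm Rate: fraction of benign traffic wrongly flagged.
    Detection Rate (recall): fraction of attacks correctly detected."""
    return fp / (fp + tn), tp / (tp + fn)

far, dr = far_dr(tp=90, fp=5, tn=95, fn=10)  # FAR 0.05, DR 0.90
```

A good IDS pushes DR toward 1 while holding FAR near 0; reporting both is what makes the FAR comparison in the figure meaningful.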
Article
Full-text available
The growing threat of advanced security attacks targeting enterprise information systems raises the need for novel security solutions that promptly identify and respond to these issues. These security strategies must automate threat detection and response in enterprise settings, enabling organizations to address emerging threats, ongoing attacks, and imminent risks adequately. Traditional security strategies that rely on rule-based approaches for intrusion detection systems are inefficient in achieving these objectives due to their limited capabilities in identifying new threats. As a result, machine learning strategies have been proposed to address these needs, offering an intelligent detection environment for novel threats. Classification algorithms such as random forest, gradient boosting and deep learning techniques like deep neural networks have been proposed in various studies. This paper examines the performance of these models, providing a comparative review of their detection capabilities based on precision, recall, accuracy, specificity, and sensitivity. The models are tested in a Python environment due to its extensive machine learning capabilities. These tests show that random forest is the ideal model for network-based intrusion detection systems.
... Computer networks play an important part in today's society, but they are subject to invasions. As a result, the best available ways to defend the systems are required [1]. Any action performed to compromise the system's integrity, confidentiality, or availability is referred to as an intrusion. ...
... For feature selection, we use the binary tree growth algorithm (TGA) [1]. Binary TGA is explained in the following section. ...
Article
Full-text available
With the rapid advancement of networking technologies, security systems have become increasingly important to academics from several sectors. Intrusion detection (ID) provides valuable protection by reducing the human resources required to keep an eye on intruders and improving the efficiency of detecting various attacks in networks. Machine learning and deep learning are two key areas that have recently received a lot of attention, with a focus on improving the precision of detection classifiers. Using Defense Advanced Research Projects Agency (DARPA'98) datasets, a number of academics and researchers have developed intrusion detection systems. This paper discusses various approaches developed by different researchers, including scale-hybrid-IDS-AlertNet (SHIA), the forward feature selection algorithm (FFSA), modified mutual information feature selection (MMIFS) and deep neural networks (DNN), and addresses the gaps that remain to be filled, highlighting areas where these procedures can be improved; the proposed approach, an improved deep convolutional neural network (IDCNN), is also compared with existing approaches.
... Deep packet inspection may involve matching against complicated attack-rule signatures [24,25]. Pattern matching is a time-consuming procedure that requires substantially more computing power than a firewall, which might cause a NIDS to become overloaded [26]. When a NIDS becomes overburdened and begins dropping or ignoring packet content, network security may be compromised. ...