An example of the adversarial effect in machine learning
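For readers unfamiliar with the figure's subject, an adversarial effect is typically produced by perturbing an input in the direction that most increases the model's loss, as in the fast gradient sign method (FGSM). The sketch below is a minimal illustration on a toy logistic-regression model with made-up weights and data; it is not drawn from the source publication.

```python
import numpy as np

# Toy FGSM-style perturbation against a hand-built logistic-regression "model".
# Weights, bias, and the input sample are all made up; the point is only that a
# small step along the sign of the loss gradient can change the prediction.

rng = np.random.default_rng(0)
w = rng.normal(size=20)            # "trained" model weights (assumed)
b = 0.1                            # bias term
x = rng.normal(size=20)            # a clean input sample
y = 1.0                            # its true label

def predict_proba(sample):
    """Sigmoid output of the linear model."""
    return 1.0 / (1.0 + np.exp(-(w @ sample + b)))

# Gradient of the binary cross-entropy loss with respect to the input.
grad_x = (predict_proba(x) - y) * w

# FGSM: perturb the input in the direction that increases the loss.
epsilon = 0.25
x_adv = x + epsilon * np.sign(grad_x)

print("clean prediction      :", round(float(predict_proba(x)), 3))
print("adversarial prediction:", round(float(predict_proba(x_adv)), 3))
```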

Source publication
Article
Full-text available
The convenience of accessing quality services at affordable cost, anytime and anywhere, makes mobile technology very popular among users. Due to this popularity, there has been a huge rise in mobile data volume, applications, types of services, and number of customers. Furthermore, due to the COVID‐19 pandemic, the worldwide lockdown has added fuel t...

Similar publications

Conference Paper
Full-text available
In order to integrate 5G mobile radio into Time-Sensitive Networking (TSN), the 3rd Generation Partnership Project (3GPP) specified the model of a virtual 5G-TSN bridge. This contains TSN translators which map principles such as time synchronization and Quality of Service (QoS) mechanisms from TSN to 5G. However, practical implementations are not a...

Citations

... It also addresses DSS scenarios, providing a useful discussion of spectrum allocation and spectrum access. However, the authors did not survey ML papers on these topics, which are covered in [34]. In that work, the authors provide an overview of ML techniques focusing on addressing 5G network issues such as resource allocation, spectrum access, and security aspects. ...
... [33] A survey on ML algorithms in the CSS and DSS domain for CRNs. [34] A deep learning discussion to tackle 5G and beyond wireless systems issues. [35] A survey on spectrum sharing for CR towards 5G networks, including a taxonomy from the perspective of Wider-Coverage, Massive-Capacity, Massive-Connectivity, and Low-Latency. ...
Preprint
Full-text available
The 5th generation (5G) of wireless systems is being deployed with the aim of providing many sets of wireless communication services, such as low data rates for a massive number of devices, broadband, low latency, and industrial wireless access. This aim is even more complex in the next generation of wireless systems (6G), where wireless connectivity is expected to serve any connected intelligent unit, such as software robots and humans interacting in the metaverse, autonomous vehicles, drones, trains, or smart sensors monitoring cities, buildings, and the environment. Because wireless devices will be orders of magnitude denser than in 5G cellular systems, and because of their complex quality of service requirements, access to the wireless spectrum will have to be appropriately shared to avoid congestion, poor quality of service, or unsatisfactory communication delays. Spectrum sharing methods have been the object of intense study through model-based approaches, such as optimization or game theory. However, these methods may fail when facing the complexity of the communication environments in 5G, 6G, and beyond. Recently, there has been significant interest in the application and development of data-driven methods, namely machine learning methods, to handle the complex operation of spectrum sharing. In this survey, we provide a complete overview of the state-of-the-art of machine learning for spectrum sharing. First, we map the most prominent methods that we encounter in spectrum sharing. Then, we show how these machine learning methods are applied to the numerous dimensions and sub-problems of spectrum sharing, such as spectrum sensing, spectrum allocation, spectrum access, and spectrum handoff. We also highlight several open questions and future trends.
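As a concrete flavour of the data-driven spectrum-access methods surveyed above, the following minimal sketch (not taken from the survey) uses an epsilon-greedy multi-armed bandit to learn which of several shared channels yields the fewest collisions. The per-channel success probabilities are invented placeholders for the sensing/ACK feedback a real agent would observe.

```python
import random

# Epsilon-greedy bandit for opportunistic channel selection (illustrative only).
# success_prob models how often a transmission on each shared channel succeeds;
# these values are hypothetical stand-ins for real sensing/ACK feedback.
success_prob = [0.2, 0.5, 0.8, 0.4]
n_channels = len(success_prob)

counts = [0] * n_channels        # times each channel was tried
values = [0.0] * n_channels      # running estimate of each channel's success rate
epsilon = 0.1                    # exploration rate

random.seed(42)
for t in range(5000):
    if random.random() < epsilon:
        ch = random.randrange(n_channels)                     # explore
    else:
        ch = max(range(n_channels), key=lambda i: values[i])  # exploit
    reward = 1.0 if random.random() < success_prob[ch] else 0.0
    counts[ch] += 1
    values[ch] += (reward - values[ch]) / counts[ch]          # incremental mean update

print("estimated success rates:", [round(v, 2) for v in values])
print("preferred channel      :", values.index(max(values)))
```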
... A flatten operation is used to transform the output of the max-pooling operations into a 1D vector, which is passed into a ReLU dense layer. The final layer of the approach is a softmax dense layer that produces the output, as shown in Equation (12). ...
Article
Full-text available
Industrial Internet of Things (IIoT) is a pervasive network of interlinked smart devices that provide a variety of intelligent computing services in industrial environments. Many IIoT nodes handle confidential data (such as medical, transportation, or military data) and are reachable targets for hostile intruders due to their openness and varied structure. Intrusion Detection Systems (IDS) based on Machine Learning (ML) and Deep Learning (DL) techniques have received significant attention. However, existing ML and DL‐based IDS still face a number of obstacles. For instance, existing DL approaches require a substantial quantity of data for effective performance and are not feasible to run on low‐power, low‐memory devices, while imbalanced and scarce data lead to poor IDS performance. This paper proposes a self‐attention convolutional neural network (SACNN) architecture for the detection of malicious activity in IIoT networks, together with a feature extraction method that extracts the most significant features. The proposed architecture has a self‐attention layer to calculate the input attention and convolutional neural network (CNN) layers to process the attended features for prediction. The performance evaluation of the proposed SACNN architecture has been done with the Edge‐IIoTset and X‐IIoTID datasets. These datasets encompass the behaviours of contemporary IIoT communication protocols, the operations of state‐of‐the‐art devices, various attack types, and diverse attack scenarios.
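The self-attention-plus-CNN pipeline described above (attention over the input features, convolution and max-pooling, then a flatten, ReLU dense, and softmax head) can be sketched as follows. This is a hedged, generic PyTorch approximation, not the authors' exact SACNN; layer sizes, feature count, and class count are assumptions for illustration.

```python
import torch
import torch.nn as nn

# Generic attention + CNN classifier in the spirit of the architecture above.
# Feature count (64) and class count (5) are hypothetical.

class AttentionCNN(nn.Module):
    def __init__(self, n_features=64, n_classes=5, embed_dim=16):
        super().__init__()
        self.embed = nn.Linear(1, embed_dim)                 # lift each feature to a vector
        self.attn = nn.MultiheadAttention(embed_dim, num_heads=2, batch_first=True)
        self.conv = nn.Sequential(
            nn.Conv1d(embed_dim, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool1d(2),
        )
        self.head = nn.Sequential(
            nn.Flatten(),                                    # flatten pooled feature maps
            nn.Linear(32 * (n_features // 2), 64),
            nn.ReLU(),
            nn.Linear(64, n_classes),
        )

    def forward(self, x):                                    # x: (batch, n_features)
        tokens = self.embed(x.unsqueeze(-1))                 # (batch, n_features, embed_dim)
        attended, _ = self.attn(tokens, tokens, tokens)      # self-attention over features
        feats = self.conv(attended.transpose(1, 2))          # Conv1d expects (batch, channels, length)
        return torch.softmax(self.head(feats), dim=1)        # class probabilities via softmax

model = AttentionCNN()
probs = model(torch.randn(8, 64))                            # 8 synthetic flow records
print(probs.shape)                                           # torch.Size([8, 5])
```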
... The proposed Deep6GTree model merges the predictive strength of Deep Neural Networks (DNNs) with the interpretability of a decision tree to create a new technique for 6G network optimisation and decision-making [16]. This hybrid approach uses DNNs to analyse and learn from the complex, high-dimensional data of 6G environments, efficiently capturing the nuanced patterns and dynamics that define network behaviour and user interactions [17]. By integrating this with a decision tree, Deep6GTree leverages deep learning to process large datasets while ensuring that the results of such analyses are understandable and actionable. ...
Article
Full-text available
The rapid development of 6G wireless technology promises exceptional data speeds and connectivity, but it also poses major challenges in network management and security. The proposed "Deep6GTree" approach aims to address these difficulties by combining Deep Neural Networks (DNNs) with decision tree algorithms, creating a collaborative model that exploits the complexity-handling capability of DNNs and the understandability of decision trees. This method enables improved data-driven decision-making for 6G networks by efficiently processing big data to predict network needs, identify possible security risks, and ensure smooth connectivity. By applying Deep Neural Decision Trees, the study demonstrates how advanced deep learning can be used to find critical patterns in high-dimensional 6G records, while decision trees turn these findings into actionable insights through clear decision paths. This combination is beneficial for prediction as well as for network management and security, and it provides a structure for clear and understandable AI-driven decision-making in next-generation wireless networks. The study underscores the significance of modern AI techniques in overcoming the complexities of 6G technology, offering a scalable and useful solution for telecom companies and stakeholders. By exploiting the strengths of DNNs and decision trees, Deep6GTree establishes a benchmark for AI in telecom, offering a blueprint for future research and development (R&D). The study findings have implications for the design, implementation, and security of 6G networks, highlighting the potential of integrated AI methods in addressing the technological challenges of the near future.
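One common way to realise the DNN-plus-decision-tree pattern the abstract describes is to train a neural network for accuracy and then fit a shallow decision tree as an interpretable surrogate of its predictions. The sketch below illustrates that generic recipe with scikit-learn on synthetic data; it is not the authors' Deep6GTree construction, and the "network KPI" features are invented.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

# Generic surrogate-tree recipe: a deep model for prediction, a shallow tree
# for readable decision paths. The synthetic data stands in for 6G measurements.
X, y = make_classification(n_samples=2000, n_features=12, n_informative=6,
                           random_state=0)

dnn = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0)
dnn.fit(X, y)                               # deep model captures complex patterns

# Fit a shallow tree to mimic the DNN's decisions, yielding readable rules.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, dnn.predict(X))

print("surrogate agrees with DNN on "
      f"{(surrogate.predict(X) == dnn.predict(X)).mean():.1%} of samples")
print(export_text(surrogate, feature_names=[f"kpi_{i}" for i in range(12)]))
```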
... Next-generation networks [1], represented by the Cloud platform, offer users the choice of hosting their services with improved performance and reduced costs through APIs [2], significantly alleviating the operational and maintenance burden for tenants based on Shared Responsibility Models (SRM) [3]. REST architecture [4] is a commonly used and probably the most popular specification for web APIs, especially for cloud APIs. ...
Article
Full-text available
The API used to access cloud services typically follows the Representational State Transfer (REST) architecture style. RESTful architecture, a commonly used Application Programming Interface (API) paradigm, brings convenience to platforms and tenants but also introduces logical security challenges. Security issues such as quota bypass and privilege escalation are closely related to the design and implementation of API logic. With traditional code-level testing methods, it is difficult to construct a testing model for API logic and to build test samples that exercise that logic in depth, so such logical vulnerabilities are hard to detect. We propose RESTlogic for this purpose. First, we construct a test group based on the tree structure of the REST API, adapt a logic vulnerability testing model, and use feedback-based methods to detect code-document inconsistency defects. Second, based on the abstract logical testing model and resource lifecycle information, we generate test cases, complete their parameters, and alleviate inconsistency issues through parameter inference. Third, we propose a method of analyzing test results that combines status codes and call stack information, compensating for the shortcomings of traditional analysis methods. We apply our method to testing REST services, including OpenStack, an open-source cloud operating platform, for experimental evaluation, and find a series of inconsistencies, known vulnerabilities, and previously unknown logical defects.
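To make the kind of logic-level check described above more concrete, the sketch below probes a hypothetical quota limit through a REST API: it keeps creating resources past the advertised quota and flags status codes that contradict the expected logic (a possible quota bypass). The endpoint, token, payload, and quota value are placeholders, and the real RESTlogic tooling is considerably more elaborate.

```python
import requests

# Hypothetical endpoint and credentials -- placeholders for illustration only.
BASE_URL = "https://cloud.example.com/v2/volumes"
HEADERS = {"X-Auth-Token": "<token>", "Content-Type": "application/json"}
QUOTA = 10   # quota advertised by the (hypothetical) tenant settings

created, anomalies = [], []
for i in range(QUOTA + 2):                       # deliberately go past the quota
    resp = requests.post(BASE_URL,
                         json={"volume": {"size": 1, "name": f"probe-{i}"}},
                         headers=HEADERS, timeout=10)
    if i < QUOTA and resp.status_code not in (200, 201, 202):
        anomalies.append((i, resp.status_code, "creation rejected below quota"))
    if i >= QUOTA and resp.status_code in (200, 201, 202):
        anomalies.append((i, resp.status_code, "possible quota bypass"))
    if resp.ok:
        created.append(resp.json().get("volume", {}).get("id"))

# Clean up whatever was actually created, then report suspected logic defects.
for vol_id in filter(None, created):
    requests.delete(f"{BASE_URL}/{vol_id}", headers=HEADERS, timeout=10)
print(anomalies)
```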
... Traditional machine learning is widely used in resource allocation management in IoT networks [16,17], including resource scheduling and traffic classification. For instance, Junaid et al. [1] proposed a resource-efficient clustering framework for social IoT applications that performs geographic text clustering hierarchically without significantly reducing clustering quality. ...
Article
Full-text available
The development of emerging information technologies, such as the Internet of Things (IoT), edge computing, and blockchain, has triggered a significant increase in IoT application services and data volume. Ensuring satisfactory service quality for diverse IoT application services based on limited network resources has become an urgent issue. Generalized processor sharing (GPS), functioning as a central resource scheduling mechanism guiding differentiated services, stands as a key technology for implementing on-demand resource allocation. The performance prediction of GPS is a crucial step that aims to capture the actual allocated resources using various queue metrics. Some methods (mainly analytical methods) have attempted to establish upper and lower bounds or approximate solutions. Recently, artificial intelligence (AI) methods, such as deep learning, have been designed to assess performance under self-similar traffic. However, the proposed methods in the literature have been developed for specific traffic scenarios with predefined constraints, thus limiting their real-world applicability. Furthermore, the absence of a benchmark in the literature leads to an unfair performance prediction comparison. To address the drawbacks in the literature, an AI-enabled performance benchmark with comprehensive traffic-oriented experiments showcasing the performance of existing methods is presented. Specifically, three types of methods are employed: traditional approximate analytical methods, traditional machine learning-based methods, and deep learning-based methods. Following that, various traffic flows with different settings are collected, and intricate experimental analyses at both the feature and method levels under different traffic conditions are conducted. Finally, insights from the experimental analysis that may be beneficial for the future performance prediction of GPS are derived.
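For readers unfamiliar with GPS, the minimal simulation below (not part of the article's benchmark) shows where the queue metrics to be predicted come from: in every time slot the server capacity is split among backlogged flows in proportion to their weights, with unused shares redistributed, and the resulting per-flow backlogs are the kind of target that analytical or learned predictors estimate. Weights and arrival rates are invented for illustration.

```python
import random

# Minimal discrete-time sketch of Generalized Processor Sharing (GPS).

def gps_serve(backlog, weights, capacity):
    # Split the remaining capacity among still-backlogged flows in proportion to
    # their weights; capacity freed by flows that empty is redistributed.
    remaining = capacity
    while remaining > 1e-9:
        active = [i for i, b in enumerate(backlog) if b > 1e-9]
        if not active:
            break
        total_w = sum(weights[i] for i in active)
        used = 0.0
        for i in active:
            served = min(remaining * weights[i] / total_w, backlog[i])
            backlog[i] -= served
            used += served
        remaining -= used
        if used < 1e-12:          # numerical safety net
            break

random.seed(1)
weights = [0.5, 0.3, 0.2]          # GPS weights (phi_i)
arrival_rate = [0.40, 0.25, 0.15]  # mean work arriving per flow per slot (invented)
capacity = 1.0                     # work the server completes per slot
backlog = [0.0, 0.0, 0.0]
mean_backlog = [0.0, 0.0, 0.0]

n_slots = 50_000
for _ in range(n_slots):
    for i in range(3):
        backlog[i] += random.expovariate(1.0 / arrival_rate[i])  # random work arrivals
    gps_serve(backlog, weights, capacity)
    for i in range(3):
        mean_backlog[i] += backlog[i] / n_slots

# These per-flow queue metrics are what analytical bounds or learned models predict.
print("mean per-flow backlog:", [round(b, 3) for b in mean_backlog])
```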
... The third generation of techniques [2] utilize partial direct TM measurements in addition to the SNMP link counts as input for the TM estimation, unlike the first and second-generation techniques, which solely use SNMP link counts. Artificial intelligence [3][4][5], machine learning [6][7][8], and deep learning [9][10][11][12] techniques, which have gained prominence in other domains [13][14][15], have also been employed for obtaining an estimate of traffic matrices [16]. Principal Component Analysis (PCA) has been proposed by Soule et al. [2] for TM estimation. ...
... Deep reinforcement learning combines deep learning, a subfield of artificial intelligence, with reinforcement learning, a learning paradigm based on trial and error [39]. Next-generation wireless networks, such as 5G and beyond, present numerous challenges due to the increasing demand for higher data rates, lower latency, improved reliability, and better resource management [40,41]. The model is trained on a large dataset of complete images, learning to capture the distribution and patterns present in the data [42,43]. ...
... Most research is currently focused on protocols and encryption [33][34][35][36], as well as the quality of the transmitted voice traffic [37][38][39][40]. This does not eliminate the main threats as the attackers are currently focused on something else [41]. The availability of Asterisk from the Internet is one of the main threats to network security. ...
Article
Full-text available
The research problem described in this article is related to the security of an IP network that is set up between two cities using hosting. The network is used for transmitting telephone traffic between servers located in Germany and the Netherlands. The concern is that, with the increasing adoption of IP telephony worldwide, the network might be vulnerable to hacking and unauthorized access, posing a threat to the privacy and security of the transmitted information. This article proposes a solution to address these security concerns. After conducting an experiment and establishing a connection between the two servers, a dump of real traffic between them was obtained with the Wireshark sniffer. Upon analysis, a vulnerability in the network was identified which could potentially be exploited by malicious actors. To enhance the security of the network, this article suggests implementing the Transport Layer Security (TLS) protocol. TLS is a cryptographic protocol that provides secure communication over a computer network, ensuring data confidentiality and integrity during transmission. Integrating TLS into the network infrastructure will protect the telephone traffic and prevent unauthorized access and eavesdropping.
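As a minimal illustration of the TLS-protected transport the article recommends, the Python sketch below wraps a TCP connection with the standard ssl module before any signalling is sent (5061 is the conventional SIP-over-TLS port). The host name is a placeholder, and a production Asterisk deployment would instead enable TLS in its own transport configuration with proper certificates.

```python
import socket
import ssl

# Minimal sketch of TLS-protected transport along the lines the article suggests.
# "sip.example.com" is a placeholder host; 5061 is the conventional SIP-over-TLS port.

HOST, PORT = "sip.example.com", 5061

context = ssl.create_default_context()            # verifies the server certificate
context.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse legacy protocol versions

with socket.create_connection((HOST, PORT), timeout=5) as raw_sock:
    with context.wrap_socket(raw_sock, server_hostname=HOST) as tls_sock:
        print("negotiated protocol:", tls_sock.version())
        print("cipher suite       :", tls_sock.cipher())
        # Anything written to tls_sock is now encrypted in transit, preventing the
        # plaintext capture demonstrated with the Wireshark traffic dump.
        # (A real SIP OPTIONS request needs full headers; this line is only a demo.)
        tls_sock.sendall(b"OPTIONS sip:sip.example.com SIP/2.0\r\n\r\n")
```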
... Machine Learning (ML) techniques have captured considerable attention and adoption in diverse fields for their ability to extract valuable insights and make accurate predictions from complex datasets [7][8][9]. In particular, the application of ML in predicting and modeling chemical and physical processes has proven to be invaluable in areas such as environmental monitoring, industrial processes, energy, healthcare, etc. [10][11][12][13]. ...
Article
A. Ibrahim Almohana, Z. Ali Bu sinnah, T. J. Al-Musawi, "Combination of CFD and machine learning for improving simulation accuracy in water purification process via porous membranes," Journal of Molecular Liquids (2023).
Research highlights: machine learning modeling of membrane-based molecular separation; investigation of the performance of multiple boosted models; combination of computational fluid dynamics and machine learning for modeling; AdaBoost KNN achieves the highest performance among the three models.
Abstract: A membrane system for molecular separation was studied in this work using a combined modeling approach. Computational fluid dynamics (CFD) simulations were conducted and integrated with machine learning models to describe the ozonation process in membrane contactors. For the machine learning modeling, we investigated the performance of boosted models, specifically AdaBoost KNN, AdaBoost DT, and AdaBoost ARD, for predicting the concentration (C) of ozone from the input variables r and z. Hyper-parameter optimization was performed using Successive Halving. The results reveal that AdaBoost KNN achieves the highest performance among the three models, with an R² score of 0.9992, indicating an excellent fit: approximately 99.92% of the variance in the concentration can be explained by the input variables r and z. Moreover, AdaBoost KNN demonstrates a low RMSE of 1.5695E-02, indicating its ability to provide accurate predictions with small deviations from the actual values. The maximum error of 1.02733E-01 further confirms the model's robustness, as it represents the largest deviation between predicted and CFD values, which is relatively small.
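The boosted-KNN workflow summarised in the highlights can be reproduced in outline with scikit-learn, as sketched below: an AdaBoostRegressor wrapping a KNeighborsRegressor is tuned with Successive Halving and scored with R² and RMSE. The synthetic (r, z) → C data merely stands in for the CFD-generated ozone concentration field, and the hyper-parameter grid is an assumption.

```python
import numpy as np
from sklearn.ensemble import AdaBoostRegressor
from sklearn.neighbors import KNeighborsRegressor
from sklearn.experimental import enable_halving_search_cv  # noqa: F401
from sklearn.model_selection import HalvingGridSearchCV, train_test_split
from sklearn.metrics import r2_score, mean_squared_error

# Synthetic (r, z) -> C samples standing in for CFD-generated concentration data.
rng = np.random.default_rng(0)
r = rng.uniform(0.0, 1.0, 2000)                  # radial coordinate (stand-in)
z = rng.uniform(0.0, 5.0, 2000)                  # axial coordinate (stand-in)
C = np.exp(-3 * r) * (1 - np.exp(-z)) + rng.normal(0, 0.01, 2000)  # toy field

X = np.column_stack([r, z])
X_tr, X_te, y_tr, y_te = train_test_split(X, C, test_size=0.2, random_state=0)

# "estimator" is the parameter name in scikit-learn >= 1.2.
model = AdaBoostRegressor(estimator=KNeighborsRegressor(), random_state=0)
param_grid = {"n_estimators": [25, 50, 100],
              "learning_rate": [0.05, 0.1, 0.5],
              "estimator__n_neighbors": [3, 5, 9]}

# Successive Halving over the grid, as the highlights describe.
search = HalvingGridSearchCV(model, param_grid, cv=3, factor=2, random_state=0)
search.fit(X_tr, y_tr)

pred = search.predict(X_te)
print("best params:", search.best_params_)
print("R2  :", r2_score(y_te, pred))
print("RMSE:", mean_squared_error(y_te, pred) ** 0.5)
```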