Article

Abstract

Recent years have seen rapid deployment of mobile computing and Internet of Things (IoT) networks, which can be mostly attributed to the increasing communication and sensing capabilities of wireless systems. Big data analysis, pervasive computing, and eventually artificial intelligence (AI) are envisaged to be deployed on top of the IoT and create a new world characterized by data-driven AI. In this context, a novel paradigm of merging AI and wireless communications, called Wireless AI, which pushes AI frontiers to the network edge, is widely regarded as a key enabler for future intelligent network evolution. To this end, we present a comprehensive survey of the latest studies in wireless AI from the data-driven perspective. Specifically, we first propose a novel Wireless AI architecture that covers five key data-driven AI themes in wireless networks: Sensing AI, Network Device AI, Access AI, User Device AI, and Data-provenance AI. Then, for each data-driven AI theme, we present an overview of the use of AI approaches to solve the emerging data-related problems and show how AI can empower wireless network functionalities. In particular, compared to other related survey papers, we provide an in-depth discussion of Wireless AI applications in various data-driven domains wherein AI proves extremely useful for wireless network design and optimization. Finally, research challenges and future visions are also discussed to spur further research in this promising area.
... Each edge server maintains the decentralized edge blockchain, which records transactions between edge servers and IoT devices, as well as between IoT devices themselves. These transactions must be visible, uniform, and consistent [38]. ...
Conference Paper
Blockchain is the technology on which cryptocurrencies operate within the 6G Internet of Things. Due to its flexibility, it has received significant adoption in numerous applications as a possible distributed data management solution. Nonetheless, a significant obstacle that impedes blockchain's potential is its constrained scalability, which limits its capacity to support applications requiring frequent transactions. Conversely, edge computing, an innovative technology, addresses the limitations of conventional cloud-based systems by decentralizing cloud functionalities and alleviating problems such as high latency and insufficient security. However, edge computing presently encounters challenges related to decentralized governance and security. As a result, the integration of blockchain and edge computing within a cohesive framework offers substantial benefits, such as dependable network access and governance, decentralized storage, and computing at the network edge. This integration effectively mitigates many security problems and guarantees data integrity in edge computing contexts. This article offers a comprehensive analysis of blockchain technology, its attributes, and its relevance in the context of 6G. Additionally, it investigates the integration of blockchain and edge computing, analyzes diverse application cases, and addresses the related issues. Finally, the article delineates contemporary research directions and draws conclusions from the findings.
... On the other hand, it is crucial for next-generation wireless networks to incorporate the intelligence required to dynamically reconfigure the network resources. By harnessing the power of artificial intelligence (AI), these networks can significantly improve their operational efficiency and adaptability [58]. The ability of the nodes to learn from the past data supports the continuous refinement of their decision-making processes, ultimately resulting in more efficient network management. ...
Article
Physical layer security (PLS) has emerged as an innovative security measure for wireless networks, augmenting the prevailing cryptography-based methods. The concept of secrecy energy efficiency (SEE) efficiently tackles the challenge of integrating energy-efficient and secure communications. The combination of Non-orthogonal multiple access (NOMA) and cognitive radio (CR) has become a reliable solution for improving spectrum efficiency in wireless networks. This paper aims to analyze the SEE of the secondary network in NOMA-enabled underlay CR networks (NOMA-UCRNs), considering multiple non-colluding eavesdroppers. Firstly, analytical models are formulated to evaluate the SEE and secrecy sum rate (SSR) of the secondary network in NOMA-UCRN, considering residual hardware impairments, imperfect successive interference cancellation conditions, and interference power constraints at the primary receiver. Subsequently, joint optimal transmit power allocation (JOTPA) is ascertained for the secondary users (SUs) at the secondary transmitter and the secondary relay, with the aim of maximizing the SEE while satisfying constraints on tolerable interference power at the primary receiver and minimum data rates for the SUs. The computation of the JOTPA for the SUs is enabled by an iterative algorithm that utilizes the Dinkelbach method. It is demonstrated that the suggested JOTPA scheme significantly enhances the SEE of the secondary network in comparison to the random transmit power allocation and equal transmit power allocation strategies. Finally, a deep neural network (DNN) framework is developed as an innovative approach to accurately and quickly predict the JOTPA values that meet the desired objectives.
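The Dinkelbach method referenced above turns a fractional objective such as SEE into a sequence of parametric subproblems. The following is a minimal Python sketch of that iteration, using toy rate and power functions and a grid search in place of the paper's constrained subproblem; the channel gains, circuit power, and candidate grid are illustrative assumptions, not values from the paper.

```python
import numpy as np

def dinkelbach_max_ratio(f, g, candidates, tol=1e-6, max_iter=50):
    """Maximize f(x)/g(x) over a finite candidate set via the Dinkelbach method.

    f: numerator (e.g., secrecy sum rate), g: denominator (e.g., total power).
    A grid search stands in for the paper's inner optimization subproblem.
    """
    lam = 0.0
    for _ in range(max_iter):
        # Inner problem: maximize the parametric objective f(x) - lam * g(x).
        values = [f(x) - lam * g(x) for x in candidates]
        best = candidates[int(np.argmax(values))]
        if max(values) < tol:          # convergence: optimal ratio reached
            return best, lam
        lam = f(best) / g(best)        # Dinkelbach update of the ratio
    return best, lam

# Toy example: split a power budget P between two NOMA users (hypothetical
# channel gains); SEE proxy = sum rate / total consumed power.
P, g1, g2, sigma2, p_circuit = 1.0, 0.9, 0.4, 0.1, 0.2
grid = [(a * P, (1 - a) * P) for a in np.linspace(0.05, 0.95, 91)]
rate = lambda p: np.log2(1 + p[0] * g1 / sigma2) + np.log2(1 + p[1] * g2 / sigma2)
power = lambda p: p[0] + p[1] + p_circuit
p_opt, see = dinkelbach_max_ratio(rate, power, grid)
print("power split:", p_opt, "SEE estimate (bits/s/Hz per W):", round(see, 3))
```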
... By understanding how humans process information and make decisions, cognitive science enables the creation of AI agents that can navigate complex environments, adapt to changing conditions, and interact intelligently with both UAVs and ground-based entities. As a result, AI algorithms have the ability to handle and analyze enormous quantities of data instantaneously, enabling split-second decisions that optimize network performance and elevate overall system efficiency [9]. By harnessing AI, these networks can adapt swiftly to changing conditions, anticipate future demands, and self-optimize for peak performance. ...
... Until 2020, AI played mainly a supporting role in medicine, aiding doctors with tasks considered relatively simple, such as reading radiology images or triaging emergencies in hospitals. The enhancement of hardware and the increase in the amount of data available for training machine learning algorithms in the medical field opened many new doors for AI [1,2]. The COVID-19 pandemic accelerated AI integration in medicine, making it one of the most advanced fields. ...
Article
Artificial Intelligence (AI) has revolutionized various industries, and its integration into medical research is no exception. AI has become instrumental in fields like drug discovery, diagnostics, medical imaging, and predictive analytics, accelerating the pace of scientific breakthroughs. This paper examines the diverse applications of AI in medical research, highlighting its contributions to drug development, patient monitoring, and the analysis of complex biological data. Despite its potential, AI faces significant challenges, including the availability of high-quality datasets, the interpretability of AI models, and ethical concerns regarding data privacy and equity. The future of AI in medical research offers vast opportunities, but careful consideration is required to address these challenges and fully unlock its potential for improving healthcare outcomes.
Article
Edge computing has emerged as a transformative data processing method by decentralizing computations and bringing them toward the data source, significantly reducing latency and enhancing response times. However, this shift introduces unique security challenges, especially in the detection and prevention of cyberattacks. This paper gives a comprehensive evaluation of the security landscape in edge computing, focusing on identifying and mitigating various types of attacks. We explore the challenges associated with detecting and preventing attacks in edge computing environments, acknowledging the limitations of existing approaches. A notable contribution of this survey is a Web application that runs on an edge network and simulates SQL injection attacks, a common threat in edge computing. Through this simulation, we examined the sanitization strategies used to detect and prevent such attacks, verifying that the malicious SQL code was neutralized through input sanitization techniques. Our study contributes to a deeper understanding of the security landscape in edge computing by providing meaningful insights into the effectiveness of multiple prevention strategies.
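For readers unfamiliar with the sanitization idea tested in the simulated Web application, the following minimal Python sketch contrasts a vulnerable string-concatenated query with a parameterized one on an in-memory SQLite database; the schema and inputs are hypothetical and not taken from the paper.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT, role TEXT)")
conn.execute("INSERT INTO users (name, role) VALUES ('alice', 'admin'), ('bob', 'user')")

def lookup_user_unsafe(name):
    # Vulnerable pattern: string concatenation lets crafted input alter the query.
    query = f"SELECT id, name, role FROM users WHERE name = '{name}'"
    return conn.execute(query).fetchall()

def lookup_user_safe(name):
    # Parameterized query: the driver treats the input strictly as data,
    # which neutralizes injected SQL fragments.
    return conn.execute(
        "SELECT id, name, role FROM users WHERE name = ?", (name,)
    ).fetchall()

malicious = "x' OR '1'='1"
print("unsafe:", lookup_user_unsafe(malicious))  # returns every row
print("safe:  ", lookup_user_safe(malicious))    # returns no rows
```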
Article
Traffic prediction represents one of the crucial tasks for smartly optimizing the mobile network. Recently, Artificial Intelligence (AI) has attracted attention for this problem thanks to its ability to learn the state of the mobile network and make intelligent decisions. Research on this topic has concentrated on making predictions in a centralized fashion, i.e., by collecting data from the different network elements and processing it in a cloud center. This translates into inefficiencies due to the large amount of data transmission and computation required, leading to high energy consumption. In this work, we investigate a fully decentralized AI solution for mobile traffic prediction that allows data to be kept locally, reducing energy consumption through collaboration among the base station sites. To do so, we propose a novel prediction framework based on edge computing and Deep Transfer Learning (DTL) techniques, using datasets obtained at the edge through a large measurement campaign. Two main Deep Learning architectures are designed, based on Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs), and tested under different training conditions. Simulation results show that the CNN architectures outperform the RNNs in accuracy and consume less energy. In both scenarios, DTL contributes to an accuracy enhancement in 85% of the examined cases compared to their stand-alone counterparts. Additionally, DTL significantly reduces computational complexity and energy consumption during training, reducing the energy footprint by 60% for CNNs and 90% for RNNs. Finally, two cutting-edge eXplainable Artificial Intelligence techniques are employed to interpret the derived learning models.
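As a rough illustration of the transfer-learning workflow described above, the sketch below trains a small 1-D CNN on a synthetic source-site traffic trace and then fine-tunes only its head on a shorter target-site trace with the convolutional layers frozen. The architecture, data, and hyperparameters are placeholders, not the paper's models or measurement datasets (TensorFlow/Keras assumed).

```python
import numpy as np
import tensorflow as tf

# Synthetic hourly traffic traces per base station (placeholders for the
# measurement data used in the paper): predict the next value from a window.
def make_windows(series, window=24):
    X = np.stack([series[i:i + window] for i in range(len(series) - window)])
    y = series[window:]
    return X[..., None], y

def build_cnn():
    return tf.keras.Sequential([
        tf.keras.layers.Conv1D(32, 3, activation="relu"),
        tf.keras.layers.Conv1D(32, 3, activation="relu"),
        tf.keras.layers.GlobalAveragePooling1D(),
        tf.keras.layers.Dense(16, activation="relu", name="head_dense"),
        tf.keras.layers.Dense(1, name="head_out"),
    ])

rng = np.random.default_rng(0)
t = np.arange(2000)
source = np.sin(2 * np.pi * t / 24) + 0.1 * rng.standard_normal(t.size)
target = 0.8 * np.sin(2 * np.pi * (t[:500] + 3) / 24) + 0.1 * rng.standard_normal(500)

# Train on the data-rich source site.
Xs, ys = make_windows(source)
model = build_cnn()
model.compile(optimizer="adam", loss="mse")
model.fit(Xs, ys, epochs=3, batch_size=64, verbose=0)

# Transfer to a data-poor target site: freeze the convolutional feature
# extractor and fine-tune only the small regression head, cutting training cost.
for layer in model.layers:
    layer.trainable = layer.name.startswith("head")
Xt, yt = make_windows(target)
model.compile(optimizer="adam", loss="mse")
model.fit(Xt, yt, epochs=3, batch_size=32, verbose=0)
print("target-site MSE:", float(model.evaluate(Xt, yt, verbose=0)))
```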
Chapter
Cyber-physical systems (CPS) are already transforming different fields, including smart communities and energy systems. These technologies enable CPS to process large volumes of data and derive insights that support preventive action in situations requiring quick responses. In addition, we explain how 5G and edge computing are set to transform data handling and transmission, and outline how the two concepts complement each other in resource-limited environments. Challenges regarding data quality, system architecture, security threats, resource capacity, and persistence are discussed. For these challenges, we offer solutions such as data management, modularity and system decomposition, layered security, and lightweight processing for improved system resilience. Lastly, the chapter discusses current and potential advances in real-time decision making for CPS and the need for CPS to be interoperable with other CPS.
Article
Applications of WSNs are numerous and include medical status monitoring, military sensing and tracking, and traffic flow monitoring, where sensory devices frequently move between different sites. An appropriate encryption key protocol is needed to secure data and connections. The energy, memory, transmission range, communication, and processing capacity of sensor nodes are severely limited; among these resources, energy has the greatest influence on extending the network's lifetime. Key management is one tactic for preserving the security of these networks. For secure communication in dynamic WSNs with node mobility, this article proposes an Enhanced Secure Key Management (EnSKM) framework. The network model is a heterogeneous, hierarchical WSN consisting of a base station, cluster-head (CH) nodes, and several mobile sensor nodes. An improved K-means algorithm is used to cluster the mobile sensor nodes. A CH is in charge of each cluster and can broadcast messages to other clusters as well as to all sensors within its cluster. The best node is chosen as the CH using a novel metaheuristic optimization approach called Red Panda Optimization. The proposed system uses a hybrid fuzzy TOPSIS decision-making approach both for creating path keys and for adding new nodes to the network; this improves efficiency and, by reducing energy consumption, extends the network lifetime. The proposed Selfish Herd Optimization combined with ElGamal (SH-mEEG) encryption technique and modified elliptic curve cryptography improve network security. By eliminating cryptanalyzed nodes, the network's security is improved and its resilience to cryptanalysis attacks is increased. The suggested method is implemented on the NS2 platform and compared with alternative key management methods. The proposed model achieves the lowest encryption time (2.654 s) and decryption time (2.773 s) and the highest security level (98.372%) compared to existing approaches. These results demonstrate how the proposed EnSKM framework can improve security and efficiency in dynamic WSN setups.
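The clustering stage of such a framework can be illustrated with a short sketch: plain K-means groups node positions, and a simple energy-and-distance score stands in for the Red Panda Optimization used to pick cluster heads. All values and the scoring rule are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(1)
num_nodes, k = 60, 4
positions = rng.uniform(0, 100, size=(num_nodes, 2))   # node coordinates (m)
energy = rng.uniform(0.5, 2.0, size=num_nodes)         # residual energy (J)

def kmeans(points, k, iters=20):
    centroids = points[rng.choice(len(points), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(np.linalg.norm(points[:, None] - centroids, axis=2), axis=1)
        for c in range(k):
            if np.any(labels == c):
                centroids[c] = points[labels == c].mean(axis=0)
    return labels, centroids

labels, centroids = kmeans(positions, k)

# Cluster-head selection: favour nodes with high residual energy that sit
# close to the cluster centroid (a simple stand-in for the metaheuristic
# optimization used in the paper).
cluster_heads = []
for c in range(k):
    members = np.where(labels == c)[0]
    dist = np.linalg.norm(positions[members] - centroids[c], axis=1)
    score = energy[members] / (1.0 + dist)
    cluster_heads.append(int(members[np.argmax(score)]))
print("cluster heads:", cluster_heads)
```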
Article
The evolution of network technologies has significantly transformed global communication, information sharing, and connectivity. Traditional networks, relying on static configurations and manual interventions, face substantial challenges such as complex management, inefficiency, and susceptibility to human error. The rise of artificial intelligence (AI) has begun to address these issues by automating tasks like network configuration, traffic optimization, and security enhancements. Despite their potential, integrating AI models in network engineering encounters practical obstacles including complex configurations, heterogeneous infrastructure, unstructured data, and dynamic environments. Generative AI, particularly large language models (LLMs), represents a promising advancement in AI, with capabilities extending to natural language processing tasks like translation, summarization, and sentiment analysis. This paper aims to provide a comprehensive review exploring the transformative role of LLMs in modern network engineering. In particular, it addresses gaps in the existing literature by focusing on LLM applications in network design and planning, implementation, analytics, and management. It also discusses current research efforts, challenges, and future opportunities, aiming to provide a comprehensive guide for networking professionals and researchers. The main goal is to facilitate the adoption and advancement of AI and LLMs in networking, promoting more efficient, resilient, and intelligent network systems.
Article
For current and future Internet of Things (IoT) networks, mobile edge-cloud computation offloading (MECCO) has been regarded as a promising means to support delay-sensitive IoT applications. However, offloading mobile tasks to the cloud gives rise to new security issues due to malicious mobile devices (MDs). How to implement offloading to alleviate computation burdens at MDs while guaranteeing high security in mobile edge cloud is a challenging problem. In this paper, we investigate simultaneously the security and computation offloading problems in a multiuser MECCO system with blockchain. First, to improve the offloading security, we propose a trustworthy access control mechanism using blockchain, which can protect cloud resources against illegal offloading behaviours. Then, to tackle the computation management of the authorized MDs, we formulate a computation offloading problem by jointly optimizing the offloading decisions, the allocation of computing resource and radio bandwidth, and smart contract usage. This optimization problem aims to minimize the long-term system costs of latency, energy consumption and smart contract fee among all MDs. To solve the proposed offloading problem, we develop an advanced deep reinforcement learning algorithm using a double-dueling Q-network. Evaluation results from real experiments and numerical simulations demonstrate the significant advantages of our scheme over the existing approaches.
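The double-dueling Q-network mentioned above builds on the dueling architecture, in which the Q-value is decomposed into a state value and per-action advantages (the "double" part additionally uses the online network to select actions and a target network to evaluate them). Below is a minimal PyTorch sketch of the dueling head; the state features and action set are hypothetical, not the paper's formulation.

```python
import torch
import torch.nn as nn

class DuelingQNetwork(nn.Module):
    """Dueling architecture: a shared trunk feeds separate value and advantage
    streams, combined as Q(s, a) = V(s) + A(s, a) - mean_a A(s, a)."""

    def __init__(self, state_dim, num_actions, hidden=128):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU())
        self.value = nn.Linear(hidden, 1)
        self.advantage = nn.Linear(hidden, num_actions)

    def forward(self, state):
        h = self.trunk(state)
        v = self.value(h)                       # state value V(s)
        a = self.advantage(h)                   # action advantages A(s, a)
        return v + a - a.mean(dim=1, keepdim=True)

# Hypothetical state: task size, channel gain, available edge CPU, queue length.
state = torch.tensor([[5.0, 0.7, 2.4, 3.0]])
q_net = DuelingQNetwork(state_dim=4, num_actions=3)   # e.g., 3 offloading choices
print("Q-values:", q_net(state).detach().numpy())
print("greedy offloading action:", int(q_net(state).argmax(dim=1)))
```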
Article
The blockchain technology is taking the world by storm. Blockchain with its decentralized, transparent and secure nature has emerged as a disruptive technology for the next generation of numerous industrial applications. One of them is the Cloud of Things, enabled by the combination of cloud computing and the Internet of Things. In this context, blockchain provides innovative solutions to address challenges in the Cloud of Things in terms of decentralization, data privacy and network security, while the Cloud of Things offers elasticity and scalability functionalities to improve the efficiency of blockchain operations. Therefore, a novel paradigm of blockchain and Cloud of Things integration, called BCoT, has been widely regarded as a promising enabler for a wide range of application scenarios. In this paper, we present a state-of-the-art review on the BCoT integration to provide general readers with an overview of the BCoT in various aspects, including background knowledge, motivation, and integrated architecture. Particularly, we also provide an in-depth survey of BCoT applications in different use-case domains such as smart healthcare, smart city, smart transportation and smart industry. Then, we review the recent BCoT developments with the emerging blockchain and cloud platforms, services, and research projects. Finally, some important research challenges and future directions are highlighted to spur further research in this promising area.
Article
Multi-access edge computing (MEC), an emerging and promising paradigm, can alleviate the physical resource bottlenecks of smart mobile devices in terms of storage and processing capacity. In the MEC system, the traffic load and the quality of service (QoS) can be improved through service caching. However, due to the highly coupled relationship between service caching and offloading decisions, it is extremely challenging to flexibly configure the edge service cache within limited edge storage capacity to improve system performance. In this paper, we aim to minimize the average task execution time in the edge system by considering the heterogeneity of task requests, the pre-storage of application data, and the cooperation of the base stations. Firstly, we formulate the problem of joint computation offloading, service caching, and resource allocation as a Mixed Integer Non-Linear Programming (MINLP) problem, which is difficult to solve because of the coupling between optimization variables. Then we solve the MINLP problem using decomposition theory based on Generalized Benders Decomposition. Moreover, we develop an efficient algorithm for cooperative service caching and computation offloading, called GenCOSCO, to improve QoS while reducing computational complexity. In particular, for the special case in which the service cache configuration is fixed, the FixSC algorithm is proposed to derive the offloading strategy via cache replacement. Finally, extensive simulation experiments indicate that our proposed scheme can significantly reduce the average task execution time.
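To make the coupling between service caching and offloading concrete, the toy sketch below fixes a service cache and greedily picks, per task, the faster of local execution and offloading to a base station that has the required service cached. It is a simplified stand-in for the fixed-cache case, not the FixSC or GenCOSCO algorithms; all rates, CPU frequencies, and cache contents are assumed for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
num_tasks, num_bs = 6, 3
data_bits  = rng.uniform(1e6, 5e6, num_tasks)          # task input size (bits)
cycles     = rng.uniform(1e8, 5e8, num_tasks)           # required CPU cycles
service_of = rng.integers(0, 4, num_tasks)              # service type per task
cache = {0: {0, 1}, 1: {1, 2}, 2: {0, 3}}               # services cached at each BS
f_local = 1e9                                           # local CPU: 1 GHz
rate_bs, f_bs = rng.uniform(5e6, 2e7, num_bs), 5e9      # uplink rate (bit/s), edge CPU

decisions, times = [], []
for i in range(num_tasks):
    # Option 0: local execution.
    best_time, best_choice = cycles[i] / f_local, "local"
    # Options 1..B: offload to a base station that has the service cached.
    for b in range(num_bs):
        if service_of[i] in cache[b]:
            t = data_bits[i] / rate_bs[b] + cycles[i] / f_bs
            if t < best_time:
                best_time, best_choice = t, f"BS{b}"
    decisions.append(best_choice)
    times.append(best_time)

print("decisions:", decisions)
print("average execution time (s):", round(float(np.mean(times)), 4))
```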
Chapter
The growing use of IoT devices in organizations has increased the number of attack vectors available to attackers due to the less secure nature of these devices. The widely adopted bring your own device (BYOD) policy, which allows an employee to bring any IoT device into the workplace and attach it to the organization's network, also increases the risk of attacks. To address this threat, organizations often implement security policies in which only the connection of white-listed IoT devices is permitted. To monitor adherence to such policies and protect their networks, organizations must be able to identify the IoT devices connected to their networks and, more specifically, to identify connected IoT devices that are not on the white-list (unknown devices). In this study, we applied deep learning on network traffic to automatically identify IoT devices connected to the network. In contrast to previous work, our approach does not require complex feature engineering on the network traffic, since we represent the “communication behavior” of IoT devices using small images built from the IoT devices’ network traffic payloads. In our experiments, we trained a multiclass classifier on a publicly available dataset, successfully identifying 10 different IoT devices and the traffic of smartphones and computers with over 99% accuracy. We also trained multiclass classifiers to detect unauthorized IoT devices connected to the network, achieving over 99% overall average detection accuracy.
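The payload-to-image representation described above can be sketched in a few lines: payload bytes are truncated or zero-padded to a fixed length and reshaped into a small grayscale image that a CNN can consume. The image size and padding rule here are assumptions, not necessarily those used in the study.

```python
import numpy as np

def payload_to_image(payload: bytes, side: int = 32) -> np.ndarray:
    """Map a raw packet payload to a fixed-size grayscale image.

    Bytes are truncated or zero-padded to side*side values and reshaped into a
    (side, side) array in [0, 1]; a CNN classifier can then be trained on a
    stream of such images per device (a simplified stand-in for the paper's
    exact representation).
    """
    buf = np.frombuffer(payload, dtype=np.uint8)[: side * side]
    buf = np.pad(buf, (0, side * side - buf.size))
    return buf.reshape(side, side).astype(np.float32) / 255.0

# Hypothetical payload from a captured packet.
sample = bytes(range(256)) * 3
image = payload_to_image(sample)
print(image.shape, image.min(), image.max())   # (32, 32) 0.0 1.0
```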
Article
Blockchain technology with its secure, transparent and decentralized nature has been recently employed in many mobile applications. However, the process of executing extensive tasks such as computation-intensive data applications and blockchain mining requires high computational and storage capability of mobile devices, which would hinder blockchain applications in mobile systems. To meet this challenge, we propose a mobile edge computing (MEC) based blockchain network where multi-mobile users (MUs) act as miners to offload their data processing tasks and mining tasks to a nearby MEC server via wireless channels. Specifically, we formulate task offloading, user privacy preservation and mining profit as a joint optimization problem which is modelled as a Markov decision process, where our objective is to minimize the long-term system offloading utility and maximize the privacy levels for all blockchain users. We first propose a reinforcement learning (RL)-based offloading scheme which enables MUs to make optimal offloading decisions based on blockchain transaction states, wireless channel qualities between MUs and the MEC server, and user hash power states. To further improve the offloading performance for larger-scale blockchain scenarios, we then develop a deep RL algorithm using a deep Q-network, which can efficiently handle large state spaces without any prior knowledge of the system dynamics. Experimental and simulation results show that the proposed RL-based offloading schemes significantly enhance user privacy, and reduce the energy consumption as well as computation latency with minimum offloading costs in comparison with the benchmark offloading schemes.
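Before moving to deep Q-networks, the underlying RL idea can be seen in a tabular Q-learning sketch for a binary local-vs-offload decision over a toy environment; the state quantization, cost model, and hyperparameters below are illustrative assumptions rather than the paper's MDP.

```python
import numpy as np

rng = np.random.default_rng(3)
num_states, num_actions = 8, 2     # quantized (queue, channel) states; actions: {local, offload}
Q = np.zeros((num_states, num_actions))
alpha, gamma, eps = 0.1, 0.9, 0.1

def step(state, action):
    """Toy environment: cost mixes latency/energy; offloading helps in
    'good channel' states (odd indices) and hurts otherwise."""
    good_channel = state % 2 == 1
    cost = 1.0 if (action == 1) == good_channel else 3.0
    return -cost + 0.1 * rng.standard_normal(), rng.integers(num_states)

state = rng.integers(num_states)
for _ in range(20000):
    action = rng.integers(num_actions) if rng.random() < eps else int(np.argmax(Q[state]))
    reward, next_state = step(state, action)
    # Q-learning update: move Q(s, a) toward the bootstrapped target.
    Q[state, action] += alpha * (reward + gamma * Q[next_state].max() - Q[state, action])
    state = next_state

print("learned policy (0=local, 1=offload):", np.argmax(Q, axis=1))
```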
Article
In this letter, we investigate hybrid beamforming for millimeter wave massive multiple-input multiple-output (MIMO) systems based on deep reinforcement learning (DRL). Imperfect channel state information (CSI) is assumed to be available at the base station (BS). To achieve high spectral efficiency with low time consumption, we propose a novel DRL-based method called PrecoderNet to design the digital precoder and analog combiner. The DRL agent takes the digital beamformer and analog combiner of the previous learning iteration as the state, and the corresponding matrices of the current iteration as the action. Simulation results demonstrate that PrecoderNet performs well in spectral efficiency, bit error rate (BER), and time consumption, and is robust to CSI imperfection.
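For reference, the spectral-efficiency metric that a hybrid-beamforming design such as PrecoderNet targets can be evaluated with a short NumPy sketch; the random constant-modulus analog stages and Gaussian digital stages below simply stand in for whatever beamformers a learning agent would output, and the dimensions are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
Nt, Nr, Nrf, Ns, snr = 32, 16, 4, 2, 1.0   # antennas, RF chains, streams, linear SNR

H = (rng.standard_normal((Nr, Nt)) + 1j * rng.standard_normal((Nr, Nt))) / np.sqrt(2)

# Placeholder analog (constant-modulus) and digital beamformers.
F_rf = np.exp(1j * rng.uniform(0, 2 * np.pi, (Nt, Nrf))) / np.sqrt(Nt)
W_rf = np.exp(1j * rng.uniform(0, 2 * np.pi, (Nr, Nrf))) / np.sqrt(Nr)
F_bb = rng.standard_normal((Nrf, Ns)) + 1j * rng.standard_normal((Nrf, Ns))
W_bb = rng.standard_normal((Nrf, Ns)) + 1j * rng.standard_normal((Nrf, Ns))

# Normalize total transmit power: ||F_rf F_bb||_F^2 = Ns.
F_bb *= np.sqrt(Ns) / np.linalg.norm(F_rf @ F_bb, "fro")

def spectral_efficiency(H, F_rf, F_bb, W_rf, W_bb, snr, Ns):
    W = W_rf @ W_bb
    Heff = W.conj().T @ H @ F_rf @ F_bb                 # effective channel
    Rn = W.conj().T @ W                                  # combined noise covariance
    A = np.eye(Ns) + (snr / Ns) * np.linalg.inv(Rn) @ Heff @ Heff.conj().T
    return float(np.real(np.log2(np.linalg.det(A))))

print("spectral efficiency (bits/s/Hz):",
      round(spectral_efficiency(H, F_rf, F_bb, W_rf, W_bb, snr, Ns), 3))
```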
Article
Federated learning (FL), as a type of distributed machine learning, is capable of significantly preserving clients’ private data from being exposed to adversaries. Nevertheless, private information can still be divulged by analyzing uploaded parameters from clients, e.g., weights trained in deep neural networks. In this paper, to effectively prevent information leakage, we propose a novel framework based on the concept of differential privacy (DP), in which artificial noise is added to parameters at the clients’ side before aggregating, namely, noising before model aggregation FL (NbAFL). First, we prove that the NbAFL can satisfy DP under distinct protection levels by properly adapting different variances of artificial noise. Then we develop a theoretical convergence bound on the loss function of the trained FL model in the NbAFL. Specifically, the theoretical bound reveals the following three key properties: 1) there is a tradeoff between convergence performance and privacy protection levels, i.e., better convergence performance leads to a lower protection level; 2) given a fixed privacy protection level, increasing the number N of overall clients participating in FL can improve the convergence performance; and 3) there is an optimal number of aggregation times (communication rounds) in terms of convergence performance for a given protection level. Furthermore, we propose a K-client random scheduling strategy, where K (1 ≤ K < N) clients are randomly selected from the N overall clients to participate in each aggregation. We also develop a corresponding convergence bound for the loss function in this case, and the K-client random scheduling strategy also retains the above three properties. Moreover, we find that there is an optimal K that achieves the best convergence performance at a fixed privacy level. Evaluations demonstrate that our theoretical results are consistent with simulations, thereby facilitating the design of various privacy-preserving FL algorithms with different tradeoff requirements on convergence performance and privacy levels.
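The "noising before model aggregation" idea can be sketched compactly: each selected client clips its update and adds client-side Gaussian noise before the server averages the uploads. The clipping bound, noise standard deviation, and update vectors below are placeholders; the variance calibration that ties the noise to a target DP level in the paper's analysis is omitted.

```python
import numpy as np

rng = np.random.default_rng(5)
num_clients, dim = 10, 20
C, sigma = 1.0, 0.5          # clipping bound and noise std (placeholders for the DP-calibrated values)

# Hypothetical local model updates (e.g., flattened weight vectors).
true_updates = [rng.standard_normal(dim) for _ in range(num_clients)]

def noise_before_aggregation(update, clip=C, noise_std=sigma):
    # Clip to bound each client's sensitivity, then add artificial Gaussian
    # noise on the client side before uploading, following the NbAFL idea.
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip / norm)
    return clipped + noise_std * rng.standard_normal(update.shape)

# K-client random scheduling: only a random subset uploads each round.
K = 4
selected = rng.choice(num_clients, size=K, replace=False)
noisy = [noise_before_aggregation(true_updates[i]) for i in selected]
global_update = np.mean(noisy, axis=0)          # server-side FedAvg-style aggregation
print("aggregated update norm:", round(float(np.linalg.norm(global_update)), 3))
```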