Shen Wang’s research while affiliated with University College Dublin and other places


Publications (52)


Fig. 2: Detailed architectures of typical DRL approaches
Fig. 4: The system architecture of resource management in cloud computing
Deep Reinforcement Learning for Job Scheduling and Resource Management in Cloud Computing: An Algorithm-Level Review
  • Preprint

January 2025 · 105 Reads

Yan Gu · Zhaoze Liu · [...]

Cloud computing has revolutionized the provisioning of computing resources, offering scalable, flexible, and on-demand services to meet the diverse requirements of modern applications. At the heart of efficient cloud operations are job scheduling and resource management, which are critical for optimizing system performance and ensuring timely and cost-effective service delivery. However, the dynamic and heterogeneous nature of cloud environments presents significant challenges for these tasks, as workloads and resource availability can fluctuate unpredictably. Traditional approaches, including heuristic and meta-heuristic algorithms, often struggle to adapt to these real-time changes due to their reliance on static models or predefined rules. Deep Reinforcement Learning (DRL) has emerged as a promising solution to these challenges by enabling systems to learn and adapt policies based on continuous observations of the environment, facilitating intelligent and responsive decision-making. This survey provides a comprehensive review of DRL-based algorithms for job scheduling and resource management in cloud computing, analyzing their methodologies, performance metrics, and practical applications. We also highlight emerging trends and future research directions, offering valuable insights into leveraging DRL to advance both job scheduling and resource management in cloud computing.
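As a toy illustration of the reinforcement-learning framing the survey covers, the sketch below trains a tabular Q-learning agent (a deliberately simplified stand-in for DRL) to assign jobs to the less-loaded of two machines; the environment, rewards, and hyperparameters are all invented for illustration:

```python
import random

random.seed(0)

# Toy environment: each arriving job (length 1 or 3) must be assigned to
# one of two machines; the reward is the negative queue length of the
# chosen machine, so a good policy balances load. Tabular Q-learning is
# used here purely for illustration (real DRL schedulers use neural nets).
N_MACHINES = 2
actions = list(range(N_MACHINES))
Q = {}  # state (tuple of queue lengths) -> per-action value estimates

def get_q(state):
    return Q.setdefault(state, [0.0] * N_MACHINES)

alpha, gamma, eps = 0.1, 0.9, 0.1   # learning rate, discount, exploration
queues = [0, 0]
for step in range(5000):
    state = tuple(queues)
    if random.random() < eps:                       # epsilon-greedy choice
        a = random.choice(actions)
    else:
        a = max(actions, key=lambda i: get_q(state)[i])
    job = random.choice([1, 3])
    queues[a] += job                                # enqueue on chosen machine
    reward = -queues[a]                             # penalize long queues
    queues = [max(0, q - 1) for q in queues]        # each machine processes 1 unit
    nxt = tuple(queues)
    # Standard Q-learning update toward the one-step bootstrapped target
    get_q(state)[a] += alpha * (reward + gamma * max(get_q(nxt)) - get_q(state)[a])

print("states visited:", len(Q))
```

With more training and a neural function approximator in place of the table, this becomes the DQN-style setup that the surveyed algorithms build on.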


Fig. 5: Utilizable percentage of bandwidth
Spect-NFT: Non-Fungible Tokens for Dynamic Spectrum Management

December 2024 · 854 Reads · 1 Citation

Dynamic Spectrum Sharing (DSS) is a pivotal technology for optimizing spectrum utilization and fostering efficient sharing among diverse users. However, existing DSS approaches face significant challenges related to security and privacy vulnerabilities, leading to fraudulent practices within spectrum marketplaces. In this paper, we introduce Spect-NFT, a novel framework leveraging Non-Fungible Tokens (NFTs) to address these limitations and enhance the spectrum-sharing ecosystem's efficiency and revenue. Spect-NFT employs NFTs to authenticate ownership of spectrum bands, mitigating fraudulent activities and improving trust among participants. Additionally, Spect-NFT introduces digital Permission Tokens (PTs) to facilitate seamless spectrum sharing between primary users (PUs) and secondary users (SUs), enabling the sharing of a single NFT among multiple owners. We present a methodology for converting spectrum licenses into NFTs and demonstrate the feasibility of our approach using the Ethereum blockchain. Our proof-of-concept solution showcases Spect-NFT's tamper-resistant characteristics and its potential to revolutionize DSS paradigms.
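The NFT-plus-Permission-Token scheme can be illustrated with a plain-Python ledger sketch (a conceptual model only, not the paper's Ethereum smart contracts; all names and fields are assumptions):

```python
from dataclasses import dataclass, field

# Conceptual sketch: a spectrum-band NFT owned by a primary user (PU),
# which can issue Permission Tokens (PTs) granting secondary users (SUs)
# time-bounded access to the band. On-chain, this logic would live in a
# smart contract; here it is modeled as an in-memory ledger entry.
@dataclass
class SpectrumNFT:
    token_id: int
    band_mhz: tuple        # (low, high) frequency bounds of the licensed band
    owner: str             # primary user holding the NFT
    permissions: dict = field(default_factory=dict)  # SU -> PT expiry slot

    def issue_pt(self, su: str, expiry_slot: int) -> None:
        """PU grants an SU a Permission Token valid until expiry_slot."""
        self.permissions[su] = expiry_slot

    def may_transmit(self, user: str, slot: int) -> bool:
        """Ownership, or an unexpired PT, authorizes transmission."""
        return user == self.owner or self.permissions.get(user, -1) >= slot

nft = SpectrumNFT(token_id=1, band_mhz=(3550, 3650), owner="PU-A")
nft.issue_pt("SU-1", expiry_slot=10)
print(nft.may_transmit("SU-1", 5))   # True: PT still valid at slot 5
print(nft.may_transmit("SU-1", 11))  # False: PT expired after slot 10
```

The key property being sketched is that a single NFT (the license) can authorize many users at once via separate, expiring PTs, without transferring the NFT itself.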


Fig. 1. Modified D3QN Architecture
Fig. 2. Teacher Model (Top) vs Student Model (Bottom)
Fig. 3. Experimental Setup for Nvidia Jetson Orin Nano (same for Nvidia Jetson Nano)
Fig. 4. 3D and Top View of Gazebo Environments: (a) Train, (b) Test A, (c) Test B
Fig. 5. PyTorch to TensorRT process for (a) baseline model, (b) pruned/knowledge distilled models and (c) post-training quantized model weight pruned model TensorRT engine and unpruned model TensorRT engine were effectively the same. We found that when layer pruning was applied, even removing only one layer affected the models performance too much. Finally, filter and channel pruning approaches just affect the convolutional layers and we found that a big percentage reduction in just the convolutional layer portion of our D3QN model using channel or filter pruning only reduced the model size by a small amount. In contrast, a big percentage reduction in fully connected layer size via neuron pruning reduced the model size considerably. With both these reductions the amount of episodes to re-train was approximately the same and the inference latency reduction was much higher for the fully connected layer pruned models. The specific neuron pruning approach applied was 'structured L1 norm pruning' where the pruning was just applied to the fully connected layers of the D3QN model. Just fully connected layers 1 and 2 were focused on as these layers had the vast majority of the neurons. 'Structured L1 norm pruning' applied as a neuron pruning approach removes the neurons of a neural network according to the L1 norm of their weights. In other words, the sum of the absolute values of the weights associated with each neuron is calculated and those with the smallest sums (i.e., the lowest L1 norms) are removed. We found an efficient way to neuron prune our model to be as described Algorithm 1. Algorithm 1 is a custom method we developed for neuron pruning and re-training a RL model for robot (e.g., UAV) navigation problems where the initial RL model has been trained in multiple environments.
Towards Latency Efficient DRL Inference: Improving UAV Obstacle Avoidance at the Edge Through Model Compression

September 2024 · 335 Reads

It is vital that autonomous Unmanned Aerial Vehicles (UAVs) are able to avoid obstacles effectively. When avoiding such obstacles, it is important that movement decisions are made quickly (i.e., inference latency is low) so that crashes are avoided. When deep reinforcement learning (DRL) is leveraged as the method of obstacle avoidance, one way of reducing this inference latency is to deploy the DRL model at the edge (e.g., on-UAV). However, even if the DRL model is small enough to be deployed on-UAV, the inference latency can be too high. There is a lack of research that examines reducing the DRL inference time of UAVs when avoiding obstacles. Thus, this paper examines various model compression techniques to improve the inference speed of such DRL models deployed at the edge. We propose an approach that combines these model compression techniques and apply it to a well-performing Dueling Double Deep Q-Network (D3QN) baseline model. On the Nvidia Jetson Orin Nano and Nvidia Jetson Nano edge devices we show that, relative to our baseline model, this combined model compression approach reduces inference latency by 38.61% and 53.18% while only reducing the success rate by 2.34% and 5%, respectively. All our code is open-sourced.


A Survey on XAI for 5G and Beyond Security: Technical Aspects, Challenges and Research Directions

July 2024 · 324 Reads · 26 Citations

IEEE Communications Surveys & Tutorials

With the advent of 5G commercialization, the need for more reliable, faster, and intelligent telecommunication systems is envisaged for the next generation beyond 5G (B5G) radio access technologies. Artificial Intelligence (AI) and Machine Learning (ML) are immensely popular in service layer applications and have been proposed as essential enablers in many aspects of 5G and beyond networks, from IoT devices and edge computing to cloud-based infrastructures. However, existing 5G ML-based security surveys tend to emphasize AI/ML model performance and accuracy more than the models' accountability and trustworthiness. In contrast, this paper explores the potential of Explainable AI (XAI) methods, which would allow stakeholders in 5G and beyond to inspect intelligent black-box systems used to secure next-generation networks. The goal of using XAI in the security domain of 5G and beyond is to allow the decision-making processes of ML-based security systems to be transparent and comprehensible to 5G and beyond stakeholders, making the systems accountable for automated actions. This survey emphasizes the role of XAI in every facet of the forthcoming B5G era, including B5G technologies such as ORAN, zero-touch network management, and end-to-end slicing, whose benefits general users would ultimately enjoy. Furthermore, we present lessons from recent efforts and future research directions, building on currently conducted projects involving XAI.



Fig. 1: AI perturbations based on algorithm type and attack.
Fig. 3: Vulnerabilities against machine learning systems.
Fig. 5: SPATIAL concept overview.
Fig. 6: Use case 1 results (Medical application); Effect of label flipping based on (i) accuracy, (ii) precision, (iii) recall; and (iv) poisoning quantification using SHAP dissimilarity
The SPATIAL Architecture: Design and Development Experiences from Gauging and Monitoring the AI Inference Capabilities of Modern Applications

July 2024 · 458 Reads · 1 Citation

Despite its enormous economic and societal impact, a lack of human-perceived control and safety is redefining the design and development of emerging AI-based technologies. New regulatory requirements mandate increased human control and oversight of AI, transforming the development practices and responsibilities of individuals interacting with AI. In this paper, we present the SPATIAL architecture, a system that augments modern applications with capabilities to gauge and monitor trustworthy properties of AI inference capabilities. To design SPATIAL, we first explore the evolution of modern system architectures and how AI components and pipelines are integrated. With this information, we then develop a proof-of-concept architecture that analyzes AI models in a human-in-the-loop manner. SPATIAL provides an AI dashboard that allows individuals interacting with applications to obtain quantifiable insights about the AI decision process. This information is then used by human operators to comprehend possible issues that influence the performance of AI models and adjust or counter them. Through rigorous benchmarks and experiments in real-world industrial applications, we demonstrate that SPATIAL can easily augment modern applications with metrics to gauge and monitor trustworthiness; however, this in turn increases the complexity of developing and maintaining systems implementing AI. Our work highlights lessons learned and experiences from augmenting modern applications with mechanisms that support regulatory compliance of AI. In addition, we present a road map of ongoing challenges that require attention to achieve robust trustworthy analysis of AI and greater engagement of human oversight.


Fig. 1: ROBUST-6G High Level Architecture
Fig. 3: Observe, Analyze, Decide, Act loop
Fig. 4: Overview of the use cases and their key workflows and interactions among multiple components
ROBUST-6G: Smart, Automated, and Reliable Security Service Platform for 6G

July 2024 · 347 Reads · 3 Citations

In the progressive development towards 6G, the ROBUST-6G initiative aims to provide fundamental contributions to developing data-driven, AI/ML-based security solutions to meet the new concerns posed by the dynamic nature of forthcoming 6G services and networks in the future cyber-physical continuum. This aim has to be accompanied by the transversal objective of protecting AI/ML systems from security attacks and ensuring the privacy of individuals whose data are used in AI-empowered systems. ROBUST-6G will essentially investigate the security and robustness of distributed intelligence, enhancing privacy and providing transparency by leveraging explainable AI/ML (XAI). Another objective of ROBUST-6G is to promote green and sustainable AI/ML methodologies to achieve energy efficiency in 6G network design. The vision of ROBUST-6G is to optimize the computation requirements and minimize the consumed energy while providing the necessary performance for AI/ML-driven security functionalities; this will enable sustainable solutions across society while suppressing any adverse effects. This paper aims to initiate the discussion and to highlight the key goals and milestones of ROBUST-6G, which are important for investigation towards a trustworthy and secure vision for future 6G networks.


Fig. 2: Accuracy variation of FL rounds without vs. with DP for NN architectures and local rounds for NSL-KDD.
Fig. 5: t-SNE clusters for different configurations of perplexities over multiple rounds for MNIST
Navigating Explainable Privacy in Federated Learning

June 2024 · 144 Reads · 1 Citation

With the dawn of distributed Artificial Intelligence (AI), accelerated by the upcoming Beyond 5G (B5G)/6G networks, Federated Learning (FL) is emerging as an innovative approach to performing distributed learning in a privacy-preserving manner. Numerous techniques are available for fine-tuning AI-based parameters in FL. Depending on factors such as specifications, use cases, and limitations, this tuning method can vary among different FL processes, which is essential to consider when designing and developing FL-based systems. Therefore, in this research, we investigate the variation and trade-offs of such AI-based parameters, with metrics including user diversity analysis using t-SNE-based projections, and we also examine AI parameter performance from a Differential Privacy (DP) perspective. In addition, we expand our study to Explainable Artificial Intelligence (XAI)-based tuning and use SHAP and LIME methods to decipher distributed model complexity. We assess the interpretability of FL models by leveraging parameters like consistency, currentness, and compacity, which provide insight into their effectiveness and resilience in practical settings.
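The Differential Privacy mechanism examined above can be sketched as a single FL aggregation round using the Gaussian mechanism (a minimal NumPy illustration; the clipping norm, noise scale, and update shapes are assumptions, not values from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# One round of DP federated averaging: each client clips its model update
# to bound sensitivity, then adds Gaussian noise before upload (the
# Gaussian mechanism). Smaller epsilon implies a larger noise scale sigma,
# trading model accuracy for stronger privacy protection.
def dp_fedavg_round(updates, clip=1.0, sigma=0.5):
    noisy = []
    for u in updates:
        norm = np.linalg.norm(u)
        u = u * min(1.0, clip / norm)                  # clip update norm
        u = u + rng.normal(0, sigma * clip, u.shape)   # add Gaussian noise
        noisy.append(u)
    return np.mean(noisy, axis=0)                      # server-side average

updates = [rng.normal(0, 1, 10) for _ in range(5)]     # 5 hypothetical clients
avg = dp_fedavg_round(updates, clip=1.0, sigma=0.5)
print(avg.shape)  # (10,)
```

Sweeping `sigma` upward in this sketch reproduces the qualitative trade-off the paper studies: the aggregated update drifts further from the true mean as privacy protection increases.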


Fig. 4. The evolution of FL model accuracy with the increasing level of DP privacy protection (an increasing value of 1/ε means moving from "no privacy" to an increased privacy-protection level).
Fig. 5. The trend of averaged cosine similarity among prediction vectors from all clients, over increasing FL training epochs.
Towards Accountable and Resilient AI-Assisted Networks: Case Studies and Future Challenges

June 2024 · 313 Reads · 3 Citations

Artificial Intelligence (AI) will play a critical role in future networks, exploiting real-time data collection for optimized utilization of network resources. However, current AI solutions predominantly emphasize model performance enhancement, engendering substantial risk when AI encounters irregularities such as adversarial attacks or unknown misbehavior due to its "black-box" decision process. Consequently, AI-driven network solutions necessitate enhanced accountability to stakeholders and robust resilience against known AI threats. This paper introduces a high-level process, integrating Explainable AI (XAI) techniques and illustrating their application across three typical use cases: encrypted network traffic classification, malware detection, and federated learning. Unlike existing task-specific qualitative approaches, the proposed process incorporates a new set of metrics, measuring model performance, explainability, security, and privacy, thus enabling users to iteratively refine their AI network solutions. The paper also elucidates future research challenges we deem critical to the actualization of trustworthy, AI-empowered networks.


Unlocking Spectrum Potential: A Blockchain-Powered Paradigm for Dynamic Spectrum Management

June 2024 · 264 Reads · 1 Citation

The rise of mobile users, IoT devices, and data-intensive applications has led to an unprecedented surge in spectrum demand. However, current Spectrum Sharing (SS) methods, characterized by centralized control and inflexible architectures, fall short of meeting this escalating challenge. The solutions of Dynamic Spectrum Access (DSA) and Dynamic Spectrum Management (DSM) have emerged to enhance spectral efficiency and facilitate novel services in the realm of 5G networks. The successful implementation of DSM requires rapid sensing, coordination, and management, all the while upholding stringent standards for security and privacy. Unfortunately, existing DSM approaches relying on spectrum databases and Cognitive Radio (CR) techniques face issues related to reliability, security, and privacy. Blockchain (BC) emerges as a promising solution for decentralized DSM, offering superior security and privacy capabilities. Distinct features of BC, such as Smart Contracts (SCs), empower the establishment of complex Service Level Agreements (SLA) among operators. Additionally, the utilization of tokens within BC ensures a reliable and trustworthy environment for spectrum trading. Furthermore, BC's seamless integration with artificial intelligence (AI) and related Machine Learning (ML) techniques presents an opportunity to automate and enhance the adaptability of DSM frameworks. Despite the potential, there exist research gaps that warrant attention. This paper aims to comprehensively analyze BC-based DSM as the primary solution to DSM challenges and offers clear future directions for addressing BC deployment challenges.


Citations (37)


... 16 Update slice prediction model parameters using an optimizer through backpropagation. 17 Update based on the prediction results. 18 Return for legitimate traffic as the final predicted slices. ...

Reference:

Detection and Prediction of Inter-Slice Handover DDoS Attacks in 5G and Beyond Networks Using P4 and Deep Learning
A Survey on XAI for 5G and Beyond Security: Technical Aspects, Challenges and Research Directions

IEEE Communications Surveys & Tutorials

... This approach enables systems to make complex decisions in real-time without relying on cloud-based resources, which can introduce latency and dependency on network infrastructure. [15] The below flowchart visually represents the streamlined process of real-time decision-making using Multimodal AI on edge devices. It begins with data collection from sensors and cameras, followed by preprocessing tasks such as filtering and normalization. ...

CoTV: Cooperative Control for Traffic Light Signals and Connected Autonomous Vehicles Using Deep Reinforcement Learning
  • Citing Conference Paper
  • June 2024

... The challenge is to balance the degree of privacy protection with an acceptable level of accuracy, as overly stringent privacy measures may render models less effective in practice, undermining the potential of 6G to enable AI-driven automation. In [231], the accuracy of FL models is heavily penalized with the Gaussian DP, where the accuracy of the aggregated models without DP reaches over 90%, while when using DP, it does not converge the models, and the overall accuracy remains less than 10%. Figure 9 provides the overall accuracy differences over 10 FL iterations as presented in [231]. ...

Navigating Explainable Privacy in Federated Learning

... c: Distributed data handling 6G networks will rely on edge computing, where data is processed at the network edge. This distributed nature makes it more difficult to maintain data security and consistency across nodes, complicating the task of generating secure and privacy-compliant explanations [20], [228]. ...

ROBUST-6G: Smart, Automated, and Reliable Security Service Platform for 6G

... Data Privacy Sandeepa et al. (2023) revealed that transferring AI into the interpretation of vital statistics presupposes unearthing numerous privacy-related issues in managing personal health information. Health data involves some highly personal information, which requires measures to be taken to protect the individual's identity and the data from unauthorized access and disclosure (Duckert et al., 2022). ...

A Survey on Privacy of Personal and Non-Personal Data in B5G/6G Networks

ACM Computing Surveys

... One proposed solution is a framework that integrates privacy-preserving techniques with secure model protection [14], safeguarding both data privacy and the models' intellectual property. Another solution, SHERPA [15], uses explainable AI (XAI) and federated learning to defend against data poisoning attacks, with the ability to detect malicious clients even when up to 80% of participants are compromised, significantly improving robustness in federated environments. Additionally, the IFed framework enhances federated learning for the Power IoT with local differential privacy (LDP) [16], offering privacy protection while maintaining model performance through dynamic privacy budget allocation. ...

SHERPA: Explainable Robust Algorithms for Privacy-Preserved Federated Learning in Future Networks to Defend Against Data Poisoning Attacks

... Spectrum management is another critical aspect of improving 5G network performance. Efficient allocation and utilization of available spectrum resources can help mitigate congestion and interference [259]. Implementing dynamic spectrum sharing techniques allows operators to use spectrum more flexibly and efficiently, adapting to varying demand patterns in real-time. ...

Unlocking Spectrum Potential: A Blockchain-Powered Paradigm for Dynamic Spectrum Management

... These systems continuously monitor network traffic, system behavior, and data patterns to detect anomalies indicative of malware. Leveraging machine learning and deep learning algorithms, they analyze vast amounts of data to recognize potential threats, adapting to new forms of malware through adaptive learning mechanisms [43]. Additionally, real-time detection often incorporates signature based and behavior based approaches to enhance accuracy and response speed. ...

Towards Accountable and Resilient AI-Assisted Networks: Case Studies and Future Challenges

... In the ASCM problem, the presence of long-term objective (14) and long-term constraint (15) significantly enhances the complexity of the optimization problem, where the effects of decisions accumulate over multiple time slots. Moreover, the dynamic nature of the system further complicates matters since the spectrum availability fluctuates significantly as CEDs and CBSs coexist with PUs. ...

A Survey On Blockchain for Dynamic Spectrum Sharing

IEEE Open Journal of the Communications Society

... To address this issue, some researchers have employed Reinforcement Learning techniques, as seen in studies such as [3,14,16,40]. A few of studies also consider the mobility of devices such as [20,27,31,38]. ...

Urban Traffic Signal Control at the Edge: An Ontology-Enhanced Deep Reinforcement Learning Approach
  • Citing Conference Paper
  • September 2023