Ghadah Naif Alwakid’s research while affiliated with College of Computer and Information Sciences and other places


Publications (43)


Figures: sequential architectural flow of the proposed AI-SASC model; flowchart of the AI-SASC model ((a) semantic analysis-based multi-facet route fitness, (b) Genetic Algorithm with sentiment-enhanced fitness for route optimization); comparisons of AI-SASC, EMBTR, and DABPR for packet drop ratio and end-to-end delay under varying nodes and distances.

AI-Driven Sentiment-Enhanced Secure IoT Communication Model Using Resilience Behavior Analysis
  • Article
  • Full-text available

June 2025 · 6 Reads

Menwa Alshammeri · [...] · Khalid Haseeb · Ghadah Naif Alwakid

Wireless technologies and the Internet of Things (IoT) are being extensively utilized for advanced development in traditional communication systems. This evolution lowers the cost of deploying sensors at scale, changing the way devices interact and communicate in dynamic and uncertain situations. Such a constantly evolving environment presents enormous challenges for preserving a secure and lightweight IoT system, motivating the design of effective and trusted routing to support sustainable smart cities. This research study proposes a Genetic Algorithm-based, sentiment-enhanced secure optimization model that combines big data analytics and analysis rules to evaluate user feedback. Sentiment analysis is used to assess the perception of network performance, allowing the classification of device behavior as positive, neutral, or negative. By integrating sentiment-driven insights, the IoT network adjusts system configurations to enhance performance using network behavior in terms of latency, reliability, fault tolerance, and sentiment score. According to this analysis, the proposed model categorizes device behavior as positive, neutral, or negative, facilitating real-time monitoring for crucial applications. Experimental results revealed a significant improvement in threat prevention and network efficiency, demonstrating the model's resilience for real-time IoT applications.
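As a concrete illustration of the sentiment-enhanced fitness idea, here is a minimal Python sketch of a Genetic Algorithm whose route fitness blends latency, reliability, fault tolerance, and a sentiment score, as the abstract describes. The toy topology, weights, and GA operators are assumptions for illustration, not the authors' implementation.

```python
# Illustrative sketch only: a GA with a sentiment-enhanced route fitness.
# All scores, weights, and GA settings below are assumed, not from the paper.
import random

random.seed(42)
NUM_NODES = 20

# Hypothetical per-device scores in [0, 1] (higher is better); sentiment in [-1, 1].
latency = {n: random.random() for n in range(NUM_NODES)}
reliability = {n: random.random() for n in range(NUM_NODES)}
fault_tolerance = {n: random.random() for n in range(NUM_NODES)}
sentiment = {n: random.uniform(-1, 1) for n in range(NUM_NODES)}

W = {"latency": 0.3, "reliability": 0.3, "fault": 0.2, "sentiment": 0.2}  # assumed weights

def fitness(route):
    """Multi-facet route fitness: averaged per-node metrics plus sentiment."""
    def avg(table):
        return sum(table[n] for n in route) / len(route)
    return (W["latency"] * avg(latency) + W["reliability"] * avg(reliability)
            + W["fault"] * avg(fault_tolerance) + W["sentiment"] * avg(sentiment))

def random_route(src=0, dst=NUM_NODES - 1, hops=5):
    middle = random.sample([n for n in range(NUM_NODES) if n not in (src, dst)], hops)
    return [src] + middle + [dst]

def crossover(a, b):
    """One-point crossover on interior hops; src and dst are preserved."""
    cut = random.randint(1, len(a) - 2)
    middle = a[1:cut] + [n for n in b[1:-1] if n not in a[1:cut]]
    return [a[0]] + middle[: len(a) - 2] + [a[-1]]

def mutate(route, rate=0.1):
    if random.random() < rate:
        i = random.randint(1, len(route) - 2)
        route[i] = random.choice([n for n in range(NUM_NODES) if n not in route])
    return route

population = [random_route() for _ in range(30)]
for _ in range(50):  # generations
    population.sort(key=fitness, reverse=True)
    parents = population[:10]  # elitist selection
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(20)]
    population = parents + children

population.sort(key=fitness, reverse=True)
print("best route:", population[0], "fitness:", round(fitness(population[0]), 3))
```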


Optimized Autonomous Computing With Trusted Resilient Data Management for IoT Emerging Networks

June 2025 · 5 Reads · IEEE Sensors Journal

Benmao Cheng · Bo Zhao · [...]

The innovative city network integrates numerous computational and physical components to develop real-time systems. These systems can capture sensor data and distribute it to end stations. Most solutions have been presented based on the centralized computing paradigm, which effectively and systematically increases data flow; however, distributed wireless technologies and heterogeneous network services continue to raise significant research problems. These challenges lower the optimization criteria and affect the communication structure around the network edges. In this research, we propose a sustainable development model for smart networks using efficient big data management with collaborative decisions for network devices. Unlike most existing work, it applies computationally lightweight intelligence to forward collected data using mobile collectors and reduces congestion between devices on the limited bandwidth of wireless links. Moreover, the energy load is efficiently managed with edge-driven methods, and the incorporation of trusted devices ensures the integrity of the smart network while smartly tackling potential communication threats. Based on experiments conducted in Network Simulator (NS-3), the proposed model enhances the efficacy of smart networks across performance metrics, with reliable and effective management of resources in Internet of Things (IoT) networks.
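The trust component lends itself to a short sketch. Below is a hypothetical Python illustration (not the paper's design) of the kind of bookkeeping such a model implies: each device's trust rises with successful deliveries and falls on failures, and only devices above a threshold remain eligible relays for a mobile collector. The update rule and constants are assumptions.

```python
# Hypothetical trust bookkeeping for relay selection; the EMA update rule,
# threshold, and learning rate are assumed, not taken from the paper.
TRUST_MIN, ALPHA = 0.4, 0.3  # eligibility threshold and learning rate (assumed)

class Device:
    def __init__(self, name):
        self.name = name
        self.trust = 0.5  # neutral prior

    def record(self, delivered: bool):
        # Exponential moving average toward 1.0 on success, 0.0 on failure.
        target = 1.0 if delivered else 0.0
        self.trust += ALPHA * (target - self.trust)

def eligible_relays(devices):
    """Devices trusted enough to carry collected data toward the edge."""
    return [d for d in devices if d.trust >= TRUST_MIN]

devices = [Device(f"node-{i}") for i in range(4)]
devices[0].record(False)
devices[0].record(False)  # a flaky node loses trust and drops out of the relay set
for d in devices[1:]:
    d.record(True)

print([f"{d.name}:{d.trust:.2f}" for d in devices])
print("relays:", [d.name for d in eligible_relays(devices)])
```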



Integrating Edge Intelligence with Blockchain-Driven Secured IoT Healthcare Optimization Model

April 2025 · 15 Reads

The Internet of Things (IoT) and edge computing have substantially contributed to the development and growth of smart cities, handling time-constrained services and mobile devices that capture the observed environment for surveillance applications. These systems are composed of wireless cameras, digital devices, and tiny sensors to facilitate the operations of crucial healthcare services. Recently, many interactive applications have been proposed that integrate intelligent systems to handle data processing and enable dynamic communication functionalities for crucial IoT services. Nonetheless, most solutions lack optimized relaying methods and impose excessive overheads for maintaining devices' connectivity. Data integrity and trust are another vital consideration for next-generation networks. This research proposes a load-balanced trusted surveillance routing model with collaborative decisions at network edges to enhance energy management and resource balancing. It leverages graph-based optimization to enable reliable analysis of decision-making parameters. Furthermore, mobile devices integrate with the proposed model to sustain trusted routes with lightweight privacy preservation and authentication. The proposed model was evaluated in a simulation-based environment and showed substantial improvements in packet loss ratio, energy consumption, anomaly detection, and blockchain overhead over related solutions.
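To show what graph-based optimization of decision-making parameters can look like in practice, here is a minimal sketch assuming a composite link cost that mixes an energy price with a distrust penalty, so loaded or untrusted edges are avoided. The weights and toy topology are illustrative, not the paper's formulation.

```python
# Minimal sketch: shortest-path routing over a composite energy/trust cost.
# The weighting scheme and topology below are assumptions.
import heapq

def composite_cost(energy, trust, w_energy=0.6, w_trust=0.4):
    # Lower energy and higher trust both reduce the cost; weights are assumed.
    return w_energy * energy + w_trust * (1.0 - trust)

# graph[u] = list of (v, energy_cost, trust in [0, 1]) -- a toy topology
graph = {
    "cam1": [("edge1", 2.0, 0.9), ("relay", 1.0, 0.4)],
    "relay": [("edge1", 1.0, 0.4)],
    "edge1": [],
}

def best_route(graph, src, dst):
    """Dijkstra's algorithm over the composite cost."""
    dist, prev = {src: 0.0}, {}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue
        for v, energy, trust in graph[u]:
            nd = d + composite_cost(energy, trust)
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return path[::-1], dist[dst]

# The direct camera-to-edge link wins here despite higher energy, because the
# relay's low trust (0.4) inflates the two-hop alternative's composite cost.
print(best_route(graph, "cam1", "edge1"))
```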



Intrusion detection in smart grids using artificial intelligence-based ensemble modelling

February 2025 · 99 Reads · Cluster Computing

For efficient distribution of electric power, the demand for Smart Grids (SGs) has dramatically increased in recent times. However, a safe environment against cyber threats in SGs is also a concern. This paper proposes a novel Fog-based Artificial Intelligence (AI) framework for SG networks. It uses Machine Learning (ML) and Deep Learning (DL)-based ensemble models to enhance the accuracy of detecting intrusions in SG networks. This work has two main goals: addressing class imbalance in network intrusion detection datasets and building interpretable models for targeted security interventions. This is achieved through ensemble modeling: Logistic Regression (LR), Random Forest (RF), and K-Nearest Neighbors (KNN) form the ML-based ensemble, while the DL ensembles consist of aggregated neural network models trained using TensorFlow. The paper assesses their effectiveness in identifying malicious activities in SG network traffic. The present study utilizes a large dataset that was custom-designed for SG intrusion detection. Most previous studies explored different ML techniques using a single dataset; however, the performance improvement from ensemble modeling has not been explored intensively. Therefore, this paper bridges this research gap by proposing a novel ML-based ensemble model for intrusion detection using two datasets: CIC-IDS-Collection and a specifically designed Power System Intrusion dataset. This study establishes benchmark results demonstrating the effectiveness of the proposed ensemble models for intrusion detection in SGs. Results showed better accuracy, precision, recall, and F1 scores for the proposed ensemble models over the two datasets. The accuracy, precision, recall, and F1 scores of the proposed Ensemble Model 1 are 98.57%, 98.75%, 99.00%, and 98.25% for the CIC-IDS-Collection dataset and 98.75%, 99.05%, 99.20%, and 99.10% for the Power System dataset, respectively. Similarly, the proposed Ensemble Model 2 achieves 98.84% accuracy, 99.00% precision, 99.00% recall, and a 99.00% F1 score on the CIC-IDS-Collection dataset, and 99.05%, 99.30%, 99.25%, and 99.27% for the same metrics on the Power System dataset.
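For readers who want the ML-side ensemble in code, a compact sketch follows using scikit-learn's soft-voting combination of LR, RF, and KNN, the three learners named above. The synthetic imbalanced dataset and hyperparameters are placeholders, since the paper's CIC-IDS-Collection and Power System datasets are not bundled here.

```python
# Sketch of an LR + RF + KNN soft-voting ensemble on stand-in data.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

# Class imbalance stands in for rare attack traffic (assumed 85/15 split).
X, y = make_classification(n_samples=2000, n_features=30, weights=[0.85, 0.15],
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

ensemble = VotingClassifier(
    estimators=[
        ("lr", LogisticRegression(max_iter=1000, class_weight="balanced")),
        ("rf", RandomForestClassifier(n_estimators=200, class_weight="balanced")),
        ("knn", KNeighborsClassifier(n_neighbors=7)),
    ],
    voting="soft",  # average predicted probabilities across the three models
)
ensemble.fit(X_tr, y_tr)
print(classification_report(y_te, ensemble.predict(X_te), digits=3))
```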


Optimized machine learning framework for cardiovascular disease diagnosis: a novel ethical perspective

February 2025 · 34 Reads · BMC Cardiovascular Disorders

Alignment of advanced cutting-edge technologies such as Artificial Intelligence (AI) has emerged as a significant driving force to achieve greater precision and timeliness in identifying cardiovascular diseases (CVDs). However, it is difficult to achieve high accuracy and reliability in CVD diagnostics due to complex clinical data and the selection and modeling of useful features. Therefore, this paper studies advanced AI-based feature selection techniques and the application of AI technologies in CVD classification. It uses methodologies such as Chi-square, Info Gain, Forward Selection, and Backward Elimination to distill cardiovascular health indicators into a refined eight-feature subset. This study emphasizes ethical considerations, including transparency, interpretability, and bias mitigation, achieved by employing unbiased datasets, fair feature selection techniques, and rigorous validation metrics to ensure fairness and trustworthiness in the AI-based diagnostic process. In addition, the integration of various Machine Learning (ML) models, encompassing Random Forest (RF), XGBoost, Decision Trees (DT), and Logistic Regression (LR), facilitates a comprehensive exploration of predictive performance. Among this diverse range of models, XGBoost stands out as the top performer, achieving exceptional scores with a 99% accuracy rate, 100% recall, 99% F1-measure, and 99% precision. Furthermore, we venture into dimensionality reduction, applying Principal Component Analysis (PCA) to the eight-feature subset, effectively refining it to a compact six-attribute feature subset. Once again, XGBoost shines as the model of choice, yielding outstanding results: accuracy, recall, F1-measure, and precision scores of 98%, 100%, 98%, and 97%, respectively, when applied to the feature subset derived from the combination of Chi-square and Forward Selection methods.
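The selection-then-classification workflow described above can be sketched in a few lines. The pipeline below applies chi-square scoring to pick an eight-feature subset, compresses it to six components with PCA, and classifies with XGBoost; the feature counts follow the abstract, while the synthetic data and settings are assumptions.

```python
# Sketch of chi-square selection -> PCA -> XGBoost on stand-in data.
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import MinMaxScaler
from xgboost import XGBClassifier  # assumes the xgboost package is installed

X, y = make_classification(n_samples=1500, n_features=20, n_informative=8,
                           random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=1)

pipe = Pipeline([
    ("scale", MinMaxScaler()),            # chi2 requires non-negative inputs
    ("select", SelectKBest(chi2, k=8)),   # eight-feature subset, as in the paper
    ("pca", PCA(n_components=6)),         # compact six-attribute representation
    ("clf", XGBClassifier(n_estimators=300, eval_metric="logloss")),
])
pipe.fit(X_tr, y_tr)
print("held-out accuracy:", round(pipe.score(X_te, y_te), 3))
```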


Automated Detection and Severity Prediction of Wheat Rust Using Cost‐Effective Xception Architecture

February 2025 · 23 Reads · 13 Citations

Wheat crop production is under constant threat from leaf and stripe rust, an airborne fungal disease caused by the pathogen Puccinia triticina. Early detection and efficient crop phenotyping are crucial for managing and controlling the spread of this disease in susceptible wheat varieties. Current detection methods are predominantly manual and labour-intensive. Traditional strategies such as cultivating resistant varieties, applying fungicides, and practicing good agricultural techniques often fall short in effectively identifying and responding to wheat rust outbreaks. To address these challenges, we propose an innovative computer vision-based disease severity prediction pipeline. Our approach utilizes a deep learning-based classifier to differentiate between healthy and rust-infected wheat leaves. Upon identifying an infected leaf, we apply GrabCut-based segmentation to isolate the foreground mask. This mask is then processed in the CIELAB color space to distinguish leaf rust stripes and spores. The disease severity ratio is calculated to measure the extent of infection on each test leaf. This paper introduces a ground-breaking disease severity prediction method, offering a low-cost, accessible, and automated solution for wheat rust disease screening in field conditions using digital colour images. Our approach represents a significant advancement in crop disease management, promising timely interventions and better control measures for wheat rust.
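A hedged reconstruction of the severity-ratio step may help: GrabCut isolates the leaf, the CIELAB a*/b* channels flag rust-coloured pixels, and severity is the infected area over the leaf area. The thresholds, input path, and rectangle prior below are assumptions, not the paper's calibration.

```python
# Illustrative severity-ratio computation; thresholds and paths are assumed.
import cv2
import numpy as np

img = cv2.imread("leaf.jpg")  # hypothetical test image
mask = np.zeros(img.shape[:2], np.uint8)
bgd, fgd = np.zeros((1, 65), np.float64), np.zeros((1, 65), np.float64)
rect = (10, 10, img.shape[1] - 20, img.shape[0] - 20)  # crude leaf prior

# Foreground (leaf) extraction via GrabCut.
cv2.grabCut(img, mask, rect, bgd, fgd, 5, cv2.GC_INIT_WITH_RECT)
leaf = np.isin(mask, (cv2.GC_FGD, cv2.GC_PR_FGD))

# Rust pixels: in CIELAB, rust pustules skew red-yellow (high a* and b*).
lab = cv2.cvtColor(img, cv2.COLOR_BGR2LAB)
a, b = lab[..., 1], lab[..., 2]
rust = leaf & (a > 140) & (b > 140)  # assumed thresholds on the 0-255 scale

severity = rust.sum() / max(leaf.sum(), 1)  # infected fraction of the leaf
print(f"disease severity ratio: {severity:.2%}")
```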



Obfuscated Malware Detection and Classification in Network Traffic Leveraging Hybrid Large Language Models and Synthetic Data

January 2025 · 159 Reads · 1 Citation

Android malware detection remains a critical issue for mobile security. Cybercriminals target Android since it is the most popular smartphone operating system (OS). Malware detection, analysis, and classification have become diverse research areas. This paper presents a smart sensing model based on large language models (LLMs) for detecting and classifying Android malware in network traffic. The network traffic that Android apps constantly generate may contain harmful components that can damage these apps. However, one of the main challenges in developing smart sensing systems for malware analysis is the scarcity of traffic data due to privacy concerns. To overcome this, a two-step smart sensing model, Syn-detect, is proposed. The first step involves generating synthetic TCP malware traffic data with malicious content using GPT-2. These data are then preprocessed and used in the second step, which focuses on malware classification. This phase leverages a fine-tuned LLM, Bidirectional Encoder Representations from Transformers (BERT), with classification layers. BERT is responsible for tokenization, generating word embeddings, and classifying malware. The Syn-detect model was tested on two Android malware datasets: CIC-AndMal2017 and CIC-AAGM2017. The model achieved an accuracy of 99.8% on CIC-AndMal2017 and 99.3% on CIC-AAGM2017. The Matthews Correlation Coefficient (MCC) values for the predictions were 99% for CIC-AndMal2017 and 98% for CIC-AAGM2017. These results demonstrate the strong performance of the Syn-detect smart sensing model. Compared to the latest research in Android malware classification, the model outperformed other approaches, delivering promising results.
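The two-step shape of Syn-detect can be sketched with Hugging Face pipelines: GPT-2 drafts synthetic flow-like records, then a BERT sequence classifier labels traffic. The prompt, labels, and the un-fine-tuned bert-base-uncased checkpoint below are placeholders; the paper fine-tunes BERT on the synthetic-plus-real corpus, which is omitted here.

```python
# Skeleton of the generate-then-classify shape; checkpoints and prompts are
# placeholders, and a production run would fine-tune BERT before trusting it.
from transformers import pipeline

# Step 1: synthesize TCP-flow-like records (toy prompt; real inputs would be
# serialized flow statistics, not free text).
generator = pipeline("text-generation", model="gpt2")
synthetic = generator(
    "TCP flow: src_port=443 dst_port=51022 bytes=",
    max_new_tokens=24,
    num_return_sequences=3,
)
records = [s["generated_text"] for s in synthetic]

# Step 2: classify flows with a BERT encoder plus a classification head.
# The head here is randomly initialized; fine-tuning on labeled real and
# synthetic traffic is required before the labels mean anything.
classifier = pipeline("text-classification", model="bert-base-uncased")
for record in records:
    print(classifier(record[:512])[0])
```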


Citations (20)


... Similarly, [10] used the same dataset and obtained a 96.40% accuracy using the ResNet50 model. Two recent studies, [46] and [48], utilized ResNet50 and Xception models, respectively, using the same datasets. ...

Reference:

An Enhanced Wheat Stripe Rust Segmentation Approach Using Vision Transformer Model
Automated Detection and Severity Prediction of Wheat Rust Using Cost‐Effective Xception Architecture

... In the context of accessibility, ML-based approaches have been developed to monitor disabled pilgrims during Hajj and Umrah using activity recognition and anomaly detection, with both RF and SNN models achieving 93% accuracy [58]. Moreover, a review [59] on Arabic text classification highlights ongoing challenges in data augmentation, particularly for long texts, emphasizing the need for better benchmark datasets and effective augmentation strategies tailored to the Arabic language. ...

Machine Learning-Integrated Usability Evaluation and Monitoring of Human Activities for Individuals With Special Needs During Hajj and Umrah

IEEE Access

... The effectiveness of hybrid approaches that leverage static and dynamic analysis for obfuscated malware detection has been demonstrated in resource-constrained environments, such as IoT devices [53], and similar techniques could enhance ransomware detection capabilities. Furthermore, leveraging synthetic data generation techniques, such as those used in Android malware detection with large language models [54], may help improve dataset diversity and model robustness against novel threats. ...

Obfuscated Malware Detection and Classification in Network Traffic Leveraging Hybrid Large Language Models and Synthetic Data

... The MCME model combined the multi-scale CNNs with a new cost function, achieving an accuracy of 98.86% and an AUC of 0.9985 [30]. Transfer learning models such as MobileNetV2 gave excellent results, achieving 100% on a Kaggle OCT dataset [31]. The MDBL-Net used a multi-view, multi-scale-based feature extraction strategy to achieve maximum accuracy with relatively less training time [32]. ...

A Comparative Investigation of Transfer Learning Frameworks Using OCT Pictures for Retinal Disorder Identification

IEEE Access

... Currently, artificial intelligence (AI) is gaining increasing importance and establishing itself as a strategic tool in various fields, including the environmental field [1][2][3][4]. With the rapid development of technologies and the growth of environmental challenges, artificial intelligence (AI) is starting to play an increasingly important role in the sustainable management of natural resources [5][6][7][8]. ...

Securing the Internet of Things in Artificial Intelligence Era: A Comprehensive Survey

IEEE Access

... Researching future smart healthcare systems and advanced connectivity roles in delivering better healthcare remains essential [9]. Humayun et al. (2024) present Smart, Secure, and Energy-efficient Health Care Edge Technology (SSEHCET) as an integrated AI and mobile edge computing solution to boost eHealth security and efficiency. The research demonstrates that these technologies provide enormous power to upgrade healthcare management capabilities. ...

Transformative synergy: SSEHCET—bridging mobile edge computing and AI for enhanced eHealth security and efficiency

Journal of Cloud Computing

... The model achieved impressive metrics, including 98.75% precision, 98.89% F1-score, and 97.89% recall, outperforming current methods in terms of accuracy and prediction performance. In [14], four scenarios using the APTOS dataset were tested with HIST, CLAHE, and ESRGAN. The CLAHE and ESRGAN combination achieved the highest accuracy of 97.83% with a CNN, matching experienced ophthalmologists. ...

Enhancing diabetic retinopathy classification using deep learning

... Large public datasets have enabled successful binary and multi-class classifications, but multi-class approaches face challenges like data imbalance, imaging quality issues, and suboptimal feature extraction [16,17,18,19,20,21,22,23]. To address these issues, preprocessing methods like contrast enhancement, generative modeling, and attention-based extraction have improved diagnostic accuracy [24]. Ensemble methods, transfer learning, and hybrid classifications have advanced automated detection of critical DR indicators like microaneurysms, hemorrhages, and lesion segmentation. ...

Deep learning-enhanced diabetic retinopathy image classification

... This section presents an overview of the current state-of-the-art, highlighting key methodologies, findings, and innovations from recent research in the field. To address class imbalance, Alwakid et al. [24] and Alwakid et al. [31] conducted experiments on the APTOS and DDR datasets using DenseNet121 and advanced image enhancement techniques like ESRGAN, histogram equalization, and CLAHE. They reported an accuracy of 98.7%, which shows the potential of high-resolution image synthesis in improving classification accuracy. ...

Enhancement of Diabetic Retinopathy Prognostication Using Deep Learning, CLAHE, and ESRGAN

... Brain tumors can significantly disrupt normal neurological processes, potentially affecting everything from motor skills and sensory perception to cognitive functions and personality. The impact Computer-Aided Diagnosis (CAD) challenges [12]. CNNs excel in various tasks crucial for brain tumor analysis, including image recognition, tumor classification, precise segmentation, and early detection. ...

Diagnosing Melanomas in Dermoscopy Images Using Deep Learning