Kaleem Razzaq Malik’s research while affiliated with Air University and other places


Publications (37)


A new approach of anomaly detection in shopping center surveillance videos for theft prevention based on RLCNN model
  • Article

June 2025 · 15 Reads · Kaleem Razzaq Malik · [...] · Ayed Alwadain
The amount of video data produced daily by today’s surveillance systems is enormous, making analysis difficult for computer vision specialists. Continuously searching these massive video streams for unexpected incidents is challenging because such incidents occur rarely and are easily missed. In contrast, deep learning-based anomaly detection reduces the need for human labor and offers comparably trustworthy decision-making, thereby promoting public safety. In this article, we introduce a system for efficient anomaly detection that can operate in surveillance networks of modest complexity. The proposed method first extracts spatiotemporal features from a group of frames. Once these deep features are passed on, a multi-layer long short-term memory model can precisely identify ongoing unusual activity in complicated video scenes of a busy shopping mall. We conducted in-depth tests on numerous benchmark anomaly detection datasets to confirm the proposed framework’s performance in challenging surveillance scenarios. Compared to state-of-the-art techniques, our datasets, UCF50, UCF101, UCFYouTube, and UCFCustomized, provided better training and increased accuracy. Our model was trained on more classes than usual, and when the proposed model, RLCNN, was tested on those classes, the results were encouraging. All of our datasets performed well; however, on the UCFCustomized and UCFYouTube datasets we achieved higher accuracies of 96% and 97%, respectively, than on the other UCF datasets.
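
The described pipeline (per-frame spatial features feeding a multi-layer recurrent classifier) can be sketched roughly as below. This is an illustrative stand-in, not the published RLCNN: the backbone, layer counts, input resolution, and class number are all assumptions.

```python
import torch
import torch.nn as nn

class AnomalyRLCNN(nn.Module):
    """Toy CNN + multi-layer LSTM video classifier over frame clips."""
    def __init__(self, n_classes=10):
        super().__init__()
        self.cnn = nn.Sequential(               # per-frame spatial features
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.lstm = nn.LSTM(32, 64, num_layers=2, batch_first=True)
        self.fc = nn.Linear(64, n_classes)

    def forward(self, clip):                    # clip: (B, T, 3, H, W)
        B, T = clip.shape[:2]
        feats = self.cnn(clip.flatten(0, 1)).view(B, T, -1)
        _, (hn, _) = self.lstm(feats)           # temporal aggregation
        return self.fc(hn[-1])                  # per-clip class scores

logits = AnomalyRLCNN()(torch.randn(2, 16, 3, 64, 64))  # 2 clips of 16 frames
```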


Figures: DCT basic function; block diagram of the secret data embedding process using DCT and chaotic sequence generation; the zig-zag pattern used for traversing DCT coefficients; the encoder–decoder GAN network for secret image embedding; the hybrid steganography approach utilizing DCT and GANs.
A hybrid steganography framework using DCT and GAN for secure data communication in the big data era
  • Article
  • Full-text available

June 2025 · 27 Reads
The growth of the internet and big data has spurred demand for larger-scale information storage and distribution. In today’s digital era, ensuring the security of data transmission is paramount. Advancements in digital technology have facilitated the proliferation of high-resolution graphics over the Internet, raising security concerns and enabling unauthorized access to sensitive data. Researchers have increasingly explored steganography as a reliable method for secure communication because it plays a crucial role in concealing and safeguarding sensitive information. This study introduces a novel and comprehensive steganography framework using the discrete cosine transform (DCT) and a deep learning algorithm, the generative adversarial network (GAN). By leveraging deep learning techniques in both the spatial and frequency domains, the proposed hybrid architecture offers a robust solution for applications requiring high levels of data integrity and security. While conventional steganography methods are typically classified into spatial-domain and transform-domain techniques, extensive research and analysis demonstrate that the hybrid approach surpasses either technique alone in performance. The experimental results validate the effectiveness of the proposed steganography approach, showcasing superior visual image quality with a mean square error (MSE) of 93.30%, peak signal-to-noise ratio (PSNR) of 58.27%, root mean squared error (RMSE) of 96.10%, and structural similarity index measure (SSIM) of 94.20%, in comparison to existing leading methodologies. The proposed model achieved reconstruction accuracies of 96.2% using Xu Net and 95.7% with SR Net. By combining DCT with deep learning algorithms, the proposed approach overcomes the limitations of spatial-domain methods, offering a more flexible and effective steganography solution. Furthermore, simulation results confirm that the proposed technique outperforms state-of-the-art methods across key performance metrics, including MSE, PSNR, SSIM, and RMSE.
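
The abstract pairs a classical DCT-domain embedding stage with a GAN. As an illustration of the classical half only, here is a minimal sketch: blockwise DCT, zig-zag traversal of coefficients, and parity-based embedding of secret bits in mid-frequency positions. The block size, coefficient positions, and quantization step are assumptions, and the chaotic-sequence coefficient selection shown in the figures is replaced here with fixed mid-band positions.

```python
import numpy as np
from scipy.fft import dctn, idctn

def zigzag_indices(n=8):
    """Return (row, col) pairs of an n x n block in JPEG zig-zag order."""
    return sorted(((r, c) for r in range(n) for c in range(n)),
                  key=lambda rc: (rc[0] + rc[1],
                                  rc[0] if (rc[0] + rc[1]) % 2 else rc[1]))

def embed_bits(block, bits, positions, step=16.0):
    """Quantize selected DCT coefficients so their parity encodes the bits."""
    coeffs = dctn(block.astype(float), norm="ortho")
    for (r, c), bit in zip(positions, bits):
        q = int(np.round(coeffs[r, c] / step))
        if q % 2 != bit:          # force parity to match the secret bit
            q += 1
        coeffs[r, c] = q * step
    return idctn(coeffs, norm="ortho")

# Toy usage: hide 4 bits in mid-frequency coefficients of one 8x8 block.
rng = np.random.default_rng(0)
block = rng.integers(0, 256, (8, 8))
mid_freq = zigzag_indices()[10:14]    # assumed mid-band positions
stego = embed_bits(block, [1, 0, 1, 1], mid_freq)
```

Extraction reverses the process: take the DCT of each stego block and read `round(coeff / step) % 2` at the agreed positions.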


Leveraging two-dimensional pre-trained vision transformers for three-dimensional model generation via masked autoencoders

January 2025 · 48 Reads · 2 Citations

Although the Transformer architecture has established itself as the standard for natural language processing tasks, it still has relatively few applications in computer vision. In vision, attention is used either in conjunction with convolutional networks or to replace individual convolutional network components while preserving the overall network design. Differences between the two domains, such as large variations in the scale of visual entities and the much higher granularity of pixels in images compared to words in text, make it difficult to transfer the Transformer from language to vision. Masked autoencoding is a promising self-supervised learning approach that has greatly advanced both computer vision and natural language processing. For robust 2D representations, pre-training on large image datasets has become standard practice. In contrast, the limited availability of 3D datasets significantly impedes learning high-quality 3D features because of the high cost of data acquisition and processing. We present a strong multi-scale MAE pre-training architecture that uses a pre-trained ViT and a 3D representation model derived from 2D images to let 3D point clouds learn in a self-supervised manner. We employ the rich 2D information to guide a 3D masked autoencoder, which uses an encoder-decoder architecture to reconstruct the masked point tokens through self-supervised pre-training. To acquire the input point cloud’s multi-view visual characteristics, we first use pre-trained 2D models. Next, we present a two-dimensional masking method that preserves the visibility of semantically significant point tokens. Numerous experiments demonstrate how effectively our method works with pre-trained models and how well it generalizes to a range of downstream tasks. In particular, our pre-trained model achieved 93.63% accuracy for linear SVM on ScanObjectNN and 91.31% accuracy on ModelNet40. Our approach demonstrates how a straightforward architecture based solely on conventional transformers can outperform specialized transformer models trained with supervised learning.
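
As a rough illustration of the masked-autoencoding idea described above (not the paper’s actual model), the sketch below masks a random subset of point tokens and trains a small transformer encoder-decoder to reconstruct them. The token dimension, mask ratio, and BERT-style mask-token substitution are all assumptions made for brevity.

```python
import torch
import torch.nn as nn

class PointMAE(nn.Module):
    """Toy masked autoencoder over point tokens (grouped point features)."""
    def __init__(self, dim=96, n_tokens=64, mask_ratio=0.6):
        super().__init__()
        self.mask_ratio = mask_ratio
        self.mask_token = nn.Parameter(torch.zeros(1, 1, dim))
        self.pos = nn.Parameter(torch.randn(1, n_tokens, dim) * 0.02)
        layer = lambda: nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer(), num_layers=4)
        self.decoder = nn.TransformerEncoder(layer(), num_layers=2)
        self.head = nn.Linear(dim, dim)        # reconstruct token features

    def forward(self, tokens):
        B, N, D = tokens.shape
        n_mask = int(N * self.mask_ratio)
        idx = torch.rand(B, N).argsort(dim=1)[:, :n_mask]   # tokens to hide
        idx3 = idx.unsqueeze(-1).expand(-1, -1, D)
        # BERT-style masking: replace hidden tokens, then add positions.
        x = tokens.scatter(1, idx3, self.mask_token.expand(B, n_mask, D))
        x = x + self.pos
        recon = self.head(self.decoder(self.encoder(x)))
        # Reconstruction loss only on the masked positions.
        return nn.functional.mse_loss(recon.gather(1, idx3),
                                      tokens.gather(1, idx3))

model = PointMAE()
loss = model(torch.randn(8, 64, 96))   # batch of 8 point-token sequences
loss.backward()
```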


Next-generation diabetes diagnosis and personalized diet-activity management: A hybrid ensemble paradigm

January 2025 · 30 Reads · 1 Citation

Diabetes, a chronic metabolic condition characterized by persistently high blood sugar levels, necessitates early detection to mitigate its risks. Inadequate dietary choices can contribute to various health complications, emphasizing the importance of personalized nutrition interventions. However, real-time selection of diets tailored to individual nutritional needs is challenging because of the intricate nature of foods and the abundance of dietary sources. Because diabetes is a chronic condition, patients with this illness must choose a healthy diet. Patients with diabetes frequently need to visit their doctor and rely on expensive medications to manage their condition, and regularly purchasing medication for chronic illnesses is difficult in underdeveloped nations. Motivated by this, we propose a hybrid model that, rather than depending solely on medication or a visit to the doctor, can first predict diabetes and then suggest a diet and exercise regimen. This research proposes an optimized approach that harnesses machine learning classifiers, including Random Forest, Support Vector Machine, and XGBoost, to develop a robust framework for accurate diabetes prediction. The study addresses the difficulties of predicting diabetes precisely from limited labeled data and of handling outliers in diabetes datasets. Furthermore, a thorough food and exercise recommender system is presented, offering individualized and health-conscious nutrition recommendations based on user preferences and medical information. Leveraging efficient learning and inference techniques, the study achieves an error rate of less than 30% on an extensive dataset comprising over 100 million user-rated foods. This research underscores the significance of integrating machine learning classifiers with personalized nutritional recommendations to enhance diabetes prediction and management. The proposed framework has substantial potential to facilitate early detection, provide tailored dietary guidance, and alleviate the economic burden associated with diabetes-related healthcare expenses.
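
The abstract names the three classifiers but not how they are combined; a soft-voting ensemble is one plausible reading. The sketch below shows that combination on placeholder data; the dataset, hyperparameters, and voting scheme are assumptions, not the paper’s configuration.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from xgboost import XGBClassifier

# Placeholder data standing in for a diabetes dataset (features -> 0/1 label).
X, y = make_classification(n_samples=1000, n_features=8, random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=42)

# Soft-voting ensemble of the three classifiers named in the abstract.
ensemble = VotingClassifier(
    estimators=[
        ("rf", RandomForestClassifier(n_estimators=200, random_state=42)),
        ("svm", SVC(probability=True, random_state=42)),
        ("xgb", XGBClassifier(eval_metric="logloss", random_state=42)),
    ],
    voting="soft",
)
ensemble.fit(X_tr, y_tr)
print("held-out accuracy:", ensemble.score(X_te, y_te))
```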


Enhancing intrusion detection: a hybrid machine and deep learning approach

July 2024 · 717 Reads · 76 Citations

Journal of Cloud Computing

The volume of data transferred across communication infrastructures has recently increased due to technological advancements in cloud computing, the Internet of Things (IoT), and vehicular networks. As communication technology develops, network systems transmit diverse and heterogeneous data in dispersed environments. Communications over these networks, and the daily interactions that depend on them, require network security systems to provide secure and reliable information. At the same time, attackers have increased their efforts to compromise networked systems. An efficient intrusion detection system is essential, since technological advancements bring new kinds of attacks and new security limitations. This paper implements a hybrid model for Intrusion Detection (ID) with Machine Learning (ML) and Deep Learning (DL) techniques to tackle these limitations. The proposed model uses Extreme Gradient Boosting (XGBoost) and convolutional neural networks (CNN) for feature extraction and then combines each of these with long short-term memory networks (LSTM) for classification. Four benchmark datasets, CIC-IDS2017, UNSW-NB15, NSL-KDD, and WSN-DS, were used to train the model for binary and multi-class classification. As feature dimensions increase, current intrusion detection systems struggle to identify new threats, which is reflected in low test accuracy scores. To narrow down each dataset’s feature space, XGBoost and CNN feature selection algorithms are used in this work for each separate model. The experimental findings demonstrate a high detection rate and good accuracy with a relatively low False Acceptance Rate (FAR), proving the usefulness of the proposed hybrid model.
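
A minimal sketch of one branch of the described pipeline, under assumptions the abstract leaves open: XGBoost ranks features, the top-k are kept, and a small 1D-CNN feeding an LSTM classifies the reduced records. The data, k, and layer sizes are placeholders.

```python
import numpy as np
import torch
import torch.nn as nn
from sklearn.datasets import make_classification
from xgboost import XGBClassifier

# Placeholder traffic records: 40 features -> binary attack/benign label.
X, y = make_classification(n_samples=2000, n_features=40, random_state=0)

# Step 1: XGBoost-based feature selection (keep the 16 most important).
xgb = XGBClassifier(eval_metric="logloss", random_state=0).fit(X, y)
top = np.argsort(xgb.feature_importances_)[-16:]
X_sel = X[:, top].astype(np.float32)

# Step 2: 1D-CNN feature extractor followed by an LSTM classifier.
class CNNLSTM(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Sequential(nn.Conv1d(1, 8, 3, padding=1), nn.ReLU())
        self.lstm = nn.LSTM(input_size=8, hidden_size=32, batch_first=True)
        self.fc = nn.Linear(32, 2)

    def forward(self, x):                          # x: (batch, 16)
        h = self.conv(x.unsqueeze(1))              # (batch, 8, 16)
        _, (hn, _) = self.lstm(h.transpose(1, 2))  # sequence over features
        return self.fc(hn[-1])

model = CNNLSTM()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
xb = torch.from_numpy(X_sel[:64])
yb = torch.from_numpy(y[:64]).long()
loss = nn.functional.cross_entropy(model(xb), yb)  # one training step
loss.backward()
opt.step()
```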


Computing of temperature-dependent thermal conductivity and viscosity correlation for solar energy and turbulence appliances via artificial neuro network algorithm

November 2023 · 64 Reads · 7 Citations

The growing popularity of artificial intelligence approaches has led to their application in a wide range of engineering fields. The most widely used artificial intelligence tool, the artificial neural network, can predict data with high accuracy. An artificial neural network approach is used here to develop effective and accurate thermal conductivity and viscosity models for hybrid nanofluid systems. New temperature-dependent correlations are developed for the thermophysical properties of Fly Ash–Cu nanoparticles with a diameter of 15.2 nm. The highest thermal conductivity and viscosity values were obtained for hybrid nanofluids with a mixture ratio of 20:80, with maximum amplification exceeding 83.2% and 65%, respectively, over the base fluid. The Fly Ash–Cu/water hybrid nanofluid’s viscosity and thermal conductivity are evaluated over a concentration range of 0–4%, with the aim of understanding the behavior and properties of this nanofluid mixture. Nanoparticles can agglomerate or settle in the base fluid over time, compromising the stability of the nanofluid, so determining how varied quantities of Fly Ash and Cu nanoparticles affect stability and sedimentation behavior is of interest. The heat transfer potential is examined within the temperature range of 30–80 °C. Many useful results for turbulence and solar energy applications are obtained. The Mouromtseff number achieved an optimal value for all concentration levels. The heat transfer of turbulent flow and the thermal conductivity of hybrid nanofluids increase with increasing concentration and temperature, potentially impacting heat transfer applications. The conclusion explores the potential integration of the developed correlations and neural network model into practical engineering and industrial applications involving solar energy and turbulence appliances. This work extends Kanti et al. [Sol. Energy Mater. Sol. Cells 234 (2022) 111423] on the properties of a water-based fly ash-copper hybrid nanofluid for solar energy applications.
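
The core regression task described here, mapping temperature and concentration to thermal conductivity and viscosity, can be sketched with a small multilayer perceptron. The functional forms generating the synthetic samples below are invented stand-ins for the paper’s measurements, and the network size is an assumption.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

# Synthetic placeholder samples: inputs are temperature (deg C) and volume
# concentration (%); targets are thermal conductivity and viscosity. The
# formulas below are made up for illustration, not the paper's data.
rng = np.random.default_rng(1)
T = rng.uniform(30, 80, 500)
phi = rng.uniform(0, 4, 500)
k = 0.6 * (1 + 0.08 * phi) * (1 + 0.004 * (T - 30))     # fake conductivity
mu = 0.9 * (1 + 0.12 * phi) * np.exp(-0.01 * (T - 30))  # fake viscosity
X = np.column_stack([T, phi])
Y = np.column_stack([k, mu])

scaler = StandardScaler().fit(X)
ann = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=5000, random_state=1)
ann.fit(scaler.transform(X), Y)
print(ann.predict(scaler.transform([[50.0, 2.0]])))  # [k, mu] at 50 C, 2%
```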


Comparison of Blackhole and Wormhole Attacks in Cloud MANET Enabled IoT for Agricultural Field Monitoring

April 2022 · 238 Reads · 15 Citations

In Mobile Ad hoc Network (MANET)-enabled Internet of Things (IoT) agricultural field monitoring, sensor devices connect automatically and form an independent network that serves as a cloud for services such as monitoring, security, and maintenance. Cloud-based services in MANET models can be an extremely effective way to provide smart agricultural functionality for device-to-device information exchange. Security is a serious issue in Cloud-MANET-based IoT, since nodes are scattered, mobile, and lack a centralized administrator, which makes data tampering and illegal actions on cloud servers possible. These networks are therefore particularly vulnerable to Denial of Service (DoS) attacks such as Blackhole and Wormhole. A MANET-enabled IoT agricultural field monitoring environment is deployed as a case study. The effects of Blackhole and Wormhole attacks are analyzed using the Ad hoc On-demand Distance Vector (AODV) routing protocol in Network Simulator 3 (NS-3), in order to determine which attack has the greater impact on network performance. We computed performance metrics such as throughput, packet delivery ratio (PDR), end-to-end delay (EED), and jitter sum on preprocessed data gathered with the flow-monitor module of NS-3. The effect of the attacks on MANET-enabled IoT agricultural field monitoring is compared across a varying number of nodes participating in the Cloud-MANET-based IoT network. The throughput and goodput of every node are computed through the trace metrics package. This method should also be highly useful for future Cloud-MANET-based IoT smart agricultural field security research.
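
A small sketch of the kind of post-processing the abstract mentions: reading an NS-3 flow-monitor XML dump and computing per-flow PDR and throughput. The attribute names follow the flow-monitor serialization format, but the file name and the nanosecond-string parsing are assumptions to verify against your NS-3 version.

```python
import xml.etree.ElementTree as ET

def ns(value):
    """Parse a flow-monitor time string like '+1234567.0ns' into seconds."""
    return float(value.strip("+ns")) / 1e9

root = ET.parse("flowmon.xml").getroot()   # from flowmon.SerializeToXmlFile
for flow in root.find("FlowStats"):
    tx = int(flow.get("txPackets"))
    rx = int(flow.get("rxPackets"))
    dur = ns(flow.get("timeLastRxPacket")) - ns(flow.get("timeFirstTxPacket"))
    pdr = 100.0 * rx / tx if tx else 0.0
    thr = 8 * int(flow.get("rxBytes")) / dur / 1e3 if dur > 0 else 0.0
    print(f"flow {flow.get('flowId')}: PDR={pdr:.1f}%  throughput={thr:.1f} kbit/s")
```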



PrePass-Flow: A Machine Learning based technique to minimize ACL policy violation due to links failure in hybrid SDN

November 2020 · 194 Reads · 43 Citations

Computer Networks

The centralized architecture of Software-Defined Networking (SDN) reduces networking complexity and improves network manageability by omitting the need for box-by-box troubleshooting and management. However, due to both budget constraints and the maturity level of SDN-capable devices, organizations are often reluctant to adopt SDN in practice. Therefore, instead of migrating to a pure SDN architecture, an incremental SDN deployment strategy is preferred in practice. In this paper, we consider an incremental SDN deployment strategy known as hybrid SDN, involving the simultaneous use of both SDN switches and legacy switches. The links connected to an SDN switch are called SDN links, and the rest are called legacy links. An SDN controller can directly poll the status of the SDN links via the connected SDN switches. The status of the legacy links, in contrast, passes through SDN switches before reaching the controller, causing delay. As a result, the controller does not have the current status of legacy links in real time. This delay may lead to undesired outcomes; for example, it causes network reachability problems due to Access Control List (ACL) policies. Therefore, to minimize the impact of network-layer failures in hybrid SDN, we propose a Machine Learning (ML) based technique called PrePass-Flow. PrePass-Flow predicts link failures before they occur, recomputes the locations of ACL policies, and installs the ACL policies in the recomputed locations proactively. The main objective of PrePass-Flow is to minimize ACL policy violations and the network reachability problems that ACL policies cause in case of link failures. For link status prediction, PrePass-Flow uses two supervised ML models: 1) a Logistic Regression (LR) model, and 2) a Support Vector Machine (SVM) model. Testing results show that the LR model outperforms both the SVM model and an existing approach in terms of Packet Delivery Ratio (PDR) and ACL policy violations. For instance, the LR model’s accuracy is 4% better, its precision is 5% higher, its sensitivity is 10% better, and its Area Under the Curve (AUC) is 6% greater than the SVM model’s corresponding results.

Keywords: Hybrid SDN, Machine Learning, ACL, Link Failure Prediction, Network reachability
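
The LR-versus-SVM comparison at the heart of the prediction step can be sketched as below. The link-status features and labels here are synthetic placeholders; the paper’s actual feature set, data, and an ACL-placement step are not reproduced.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Placeholder link-status records: per-link features (e.g. utilization,
# error counts) with a binary "fails within horizon" label. The feature
# semantics are invented here for illustration.
rng = np.random.default_rng(7)
X = rng.normal(size=(3000, 6))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=3000) > 1).astype(int)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=7)

for name, clf in [("LR", LogisticRegression(max_iter=1000)),
                  ("SVM", SVC(probability=True))]:
    clf.fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
    print(f"{name}: accuracy={clf.score(X_te, y_te):.3f}  AUC={auc:.3f}")
```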


A Methodology of Bi-Directional Data Transformation in Emerging Research and Opportunities

September 2020

Relational Databases (RDBs) are widely employed and remain common in modern software for both storing and retrieving information. RDBs can represent substantial amounts of data without capturing its semantics: software that uses an RDB knows the significance of the information within its own application, but that semantics is not part of the data model itself. The standard framework provided by the Semantic Web (SW) gives such systems the ability to reuse and share data across diverse applications and platforms while representing the relationships in their data. As the SW has developed and matured, it has proven invaluable in a variety of settings, especially when data from different sources must be exchanged or integrated. It is not feasible to convert all existing data to the RDF form, because much software remains tied to RDB-based data representation. This dependence supports the view that both data models are needed to serve current trends in data storage and retrieval. A methodology is therefore needed that is capable of transforming data between RDB and RDF while keeping the data intact. Such a methodology would benefit systems, whether centralized or distributed, that need to use both data models without fear of change, and it can reduce the conceptual gap between the RDB and RDF data models, helping to form a cooperative environment for traditional and emerging technologies and applications. Most information in business-oriented systems is still based on the relational data model; on the other hand, the Semantic Web data model, RDF, has become the new standard for data modelling and analysis. Consequently, integration of the RDB and RDF data models has become a required feature of these systems, and the topic has been a hot research issue in recent years. Many services, including languages and tools, have been provided for the transformation of data from RDB to RDF.
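
To make the RDB-to-RDF direction concrete, here is a minimal sketch (not the book’s methodology) that maps rows of a relational table to RDF triples with rdflib; the example schema and namespace are invented.

```python
from rdflib import RDF, Graph, Literal, Namespace, URIRef

# Invented example: a relational "employee" table flattened to dicts.
rows = [
    {"id": 1, "name": "Alia", "dept": "Sales"},
    {"id": 2, "name": "Bilal", "dept": "IT"},
]

EX = Namespace("http://example.org/schema#")
g = Graph()
g.bind("ex", EX)

for row in rows:
    subject = URIRef(f"http://example.org/employee/{row['id']}")
    g.add((subject, RDF.type, EX.Employee))          # table name -> class
    g.add((subject, EX.name, Literal(row["name"])))  # column -> property
    g.add((subject, EX.dept, Literal(row["dept"])))

print(g.serialize(format="turtle"))
# The reverse direction would group triples by subject and read them back
# into rows, which is what makes the transformation bi-directional.
```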


Citations (25)


... Zhou et al. [80] developed a proactive drive failure prediction system using semi-supervised learning, improving accuracy and reliability. Sajid et al. [81] presented a hybrid machine and deep learning approach to enhance intrusion detection. ...

Reference:

Advances and Challenges in Cloud Data Storage Security: A Systematic Review
Enhancing intrusion detection: a hybrid machine and deep learning approach

Journal of Cloud Computing

... Artificial neural networks are used by Yaici et al. 56 to evaluate the solar energy storage intended for home space heating. The artificial neural network approach was used by Qureshi et al. 57 to predict precise and efficient models of hybrid nanofluids in heat transfer. The study of Ermis et al. 58 proposed an artificial neural network (ANN) utilizing feed-forward backpropagation to analyze the heat transmission in a finned-tube latent heat thermal energy storage device during a phase change. ...

Computing of temperature-dependent thermal conductivity and viscosity correlation for solar energy and turbulence appliances via artificial neuro network algorithm

... Investigations included the evaluation of protocol vulnerability and assault damage analysis for digital forensics [71]; [72] examined the Blackhole and Wormhole attacks. ...

Comparison of Blackhole and Wormhole Attacks in Cloud MANET Enabled IoT for Agricultural Field Monitoring

... Beyond DoS, WSNs face routing-centric threats like the black hole attack, depicted in Figure 12. Here, a malicious node lures others to redirect data through it, acting as a sink that absorbs, or discards, transmissions, halting network flow [95,96]. Closely related to this is the wormhole attack, shown in Figure 13, where data are tunneled between two colluding nodes (say, from node A to node B), bypassing the intended routes [97,98]. ...

Performance Analysis of Blackhole and Wormhole Attack in MANET Based IoT

... Ibrar et al. 25 defined the machine learning (ML)-based techniques for predicting link failures in hybrid SDNs. This work introduced logistic regression (LR) and support vector machine (SVM)-based techniques for predicting link failures using access control list (ACL) policies. ...

PrePass-Flow: A Machine Learning based technique to minimize ACL policy violation due to links failure in hybrid SDN
  • Citing Preprint
  • November 2020

Computer Networks

... Different traffic infrastructure devices need to achieve interoperability between heterogeneous devices and between devices and systems by formulating relevant standard communication interface specifications and cooperative agreements. Traditional operating modes and interaction methods are challenging to meet the requirements of the current heterogeneous and complex EC-IoS environment [142]. ...

Semantic Interoperability for Context-Aware Autonomous Control using IoT and Edge Computing

... In this study, ResNet18 exhibited exceptional capability in extracting spatial and contextual features from diverse inputs, such as slope gradient, lithology, vegetation coverage, and annual precipitation, thereby enhancing the classification accuracy of collapse risk prediction. Furthermore, its integration with batch normalization and data augmentation strengthens model robustness, ensuring reliable performance across diverse remote sensing datasets [16,25,26]. ...

Data Augmentation to Stabilize Image Caption Generation Models in Deep Learning

International Journal of Advanced Computer Science and Applications

... AI systems can evaluate CT images and X-rays to identify lung nodules with a high degree of precision, often equaling or even surpassing the performance of human radiologists. ML in medical imaging has significantly enhanced object recognition and classification, while more recent efforts are directed at the collaboration between human experience and ML for the best outcomes [164]. Lung cancer is the most dangerous type of cancer worldwide, and the survival rates could be improved from 5% in advanced stages to over 50% if identified at initial stages. ...

Application of machine learning and image processing for detection of breast cancer
  • Citing Chapter
  • January 2020

... In this Age-of-Information (AoI), many organizations such as hospitals, banks, and multi-national companies release large amounts of e-health data for different research activities [2]. Analysis of these datasets provides data holders with the knowledge to perform informed decisions. ...

Data Transmission and Capacity over Efficient IoT Energy Consumption

... In [232], a model for image classification is proposed that is a combination of CNN and RNN called the CNN-RNN model. In [132], the authors demonstrated a hybrid approach based on image processing and speech recognition in IoV to automatically drive the vehicles. ...

Image and command hybrid model for vehicle control using Internet of Vehicles

Transactions on Emerging Telecommunications Technologies