January 2025 · 9 Reads · International Journal of Bioinformatics Research and Applications
January 2025 · 8 Reads · International Journal of Bioinformatics Research and Applications
October 2024 · 26 Reads · Journal of Machine and Computing
Across the globe, people are working to build "smart cities" that employ technology to make residents' lives better and safer. A crucial part of smart-city infrastructure is video surveillance: installing cameras at strategic spots across the city to monitor public spaces and provide real-time footage to law enforcement and other local authorities. Deep learning algorithms offer a more effective solution, yet research in this area still faces significant challenges from changes in target size, shape deformation, occlusion, and illumination conditions as seen from a drone's perspective. In light of these issues, this study presents a highly effective and resilient approach for aerial image detection. First, the Bi-PAN-FPN concept is introduced to enhance the neck component of YOLOv8-s, addressing the prevalent problem of small targets being easily misdetected or missed in aerial photos. By fully considering and reusing multiscale features, we achieve as advanced and thorough a feature fusion process as feasible. To further reduce the number of parameters in the model and prevent information loss during long-distance feature transfer, the GhostblockV2 structure replaces part of the C2f module in the benchmark model's backbone. The proposed model's hyperparameters are optimised with the Enhanced Dwarf Mongoose Optimization Algorithm (EDMOA). Lastly, WiseIoU loss combined with a dynamic nonmonotonic focusing mechanism is employed as the bounding-box regression loss. The detector accounts for varying anchor-box quality through "outlier" evaluations, thereby improving the overall performance of the detection task.
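To make the bounding-box regression idea concrete, the following is a minimal NumPy sketch of an IoU-style loss reweighted by a dynamic, nonmonotonic focusing coefficient computed from each box's "outlier degree", in the spirit of WiseIoU; the exact formulation and the hyper-parameters alpha and delta below are illustrative assumptions, not values taken from the paper.

# Sketch of an IoU-based regression loss with a nonmonotonic focusing
# coefficient: boxes of roughly average quality receive the largest weight,
# while very good and very poor anchors are down-weighted.
import numpy as np

def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1 = max(box_a[0], box_b[0]); y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2]); y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def wise_iou_like_loss(pred_boxes, gt_boxes, alpha=1.9, delta=3.0):
    """Per-box IoU loss scaled by an outlier-based focusing coefficient."""
    iou_loss = np.array([1.0 - iou(p, g) for p, g in zip(pred_boxes, gt_boxes)])
    outlier = iou_loss / (iou_loss.mean() + 1e-9)            # quality relative to the batch
    focus = outlier / (delta * alpha ** (outlier - delta))   # nonmonotonic focusing coefficient
    return focus * iou_loss

pred = [(10, 10, 50, 50), (0, 0, 20, 20)]
gt   = [(12, 12, 52, 52), (30, 30, 60, 60)]
print(wise_iou_like_loss(pred, gt))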
December 2023 · 68 Reads · 1 Citation
May 2023 · 480 Reads · 4 Citations · International Journal of Electrical and Electronics Research
Colour image segmentation is the first stage in extracting fine details from an image in the Red Green Blue (RGB) colour space. Most grayscale and colour image segmentation algorithms use original or modified fuzzy c-means (FCM) clustering. However, most of these methods are inefficient and fail to produce acceptable segmentation results for colour images, for two reasons. First, including local spatial information often results in high computational complexity because of the repeated distance computations between clustering centres and the pixels within a small neighbouring window. Second, a typical neighbouring window tends to distort the local spatial structure of images. Colour image segmentation has been improved by introducing deep Convolutional Neural Networks (CNNs) for object detection, classification, and semantic segmentation. This study seeks to build a lightweight object detector that uses a depth and colour image from a publicly available dataset to identify objects in a scene. The detector is able to produce output in the depth modality by expanding the YOLO network's architecture. Using the Taylor-based Cat Salp Swarm Algorithm (TCSSA), the weights of the proposed model are tuned to improve the accuracy of the region extraction results. The detector's efficacy is tested by comparison on various datasets. Testing showed that the proposed model is capable of segmenting the input into bounding boxes and reporting multiple metrics. The results show that the proposed model achieved a Global Consistency Error (GCE) of 0.20 and a Variation of Information (VOI) of 1.85 on the BSDS500 dataset, whereas existing techniques achieved roughly 1.96 to 1.86 VOI and 0.25 to 0.22 GCE on the same dataset.
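For reference, the Variation of Information (VOI) reported above can be computed between two label maps as VOI = H(S1) + H(S2) - 2·I(S1; S2). The sketch below is a small, self-contained NumPy implementation of that definition, not the authors' evaluation code.

# Variation of Information between two segmentations (lower = more similar).
import numpy as np

def variation_of_information(seg_a, seg_b):
    """VOI between two integer label maps of identical shape."""
    seg_a = np.asarray(seg_a).ravel()
    seg_b = np.asarray(seg_b).ravel()
    n = seg_a.size
    # Joint label distribution as a contingency table.
    labels_a, inv_a = np.unique(seg_a, return_inverse=True)
    labels_b, inv_b = np.unique(seg_b, return_inverse=True)
    joint = np.zeros((labels_a.size, labels_b.size))
    np.add.at(joint, (inv_a, inv_b), 1.0)
    joint /= n
    pa = joint.sum(axis=1)
    pb = joint.sum(axis=0)
    # Entropies and mutual information (0 * log 0 treated as 0).
    eps = 1e-12
    h_a = -np.sum(pa * np.log(pa + eps))
    h_b = -np.sum(pb * np.log(pb + eps))
    mi = np.sum(joint * (np.log(joint + eps) - np.log(np.outer(pa, pb) + eps)))
    return h_a + h_b - 2.0 * mi

a = np.array([[0, 0, 1], [0, 1, 1]])
b = np.array([[0, 0, 1], [1, 1, 1]])
print(variation_of_information(a, b))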
July 2022 · 112 Reads · 5 Citations · Neural Computing and Applications
Image segmentation is one of the most significant tasks in image analysis, and it plays an important role in image processing for analysing and extracting meaningful information. Moreover, image segmentation is a major step in object recognition and categorization in the computer vision domain. It uses image features to separate an image into distinct regions with unique properties. Various colour image segmentation techniques have been introduced in computer vision research; however, these techniques are time consuming and fail to deliver the anticipated segmentation outcome because of poor segmentation quality and high computational complexity. To overcome these challenges, this study devises an effective hybrid optimization-based Deep Learning (DL) technique for colour image segmentation and classification. A median filter is applied to the input image to eliminate noise, which aids the subsequent segmentation and classification. Moreover, an Improved Invasive Weed Flower Pollination Optimization (IIWFPO) approach is introduced for the segmentation step. In addition, a Deep Residual Network (DRN) classifier is employed for image classification, and the classifier is trained with the developed Fr-IIWFPO algorithm. The developed colour image segmentation and classification approach obtained better performance than traditional techniques, with an accuracy of 0.9187, a sensitivity of 0.9334, and a specificity of 0.8902.
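As an illustration of the preprocessing step described above, the sketch below applies a median filter channel-wise to a colour image using SciPy; the 3x3 kernel size is an assumed, illustrative choice rather than a value taken from the paper.

# Channel-wise median filtering of an RGB image to suppress noise before segmentation.
import numpy as np
from scipy.ndimage import median_filter

def denoise_rgb(image, size=3):
    """Apply a median filter independently to each colour channel."""
    image = np.asarray(image)
    return np.stack(
        [median_filter(image[..., c], size=size) for c in range(image.shape[-1])],
        axis=-1,
    )

noisy = np.random.randint(0, 256, size=(64, 64, 3), dtype=np.uint8)
clean = denoise_rgb(noisy)
print(clean.shape)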
July 2022 · 44 Reads · 9 Citations · Applied Intelligence
Image classification has become a popular research area in computer vision due to the increasing development of image indexing and retrieval tasks. This paper proposes a Flower Henry Gas Solubility Optimization-based Deep Convolutional Neural Network (FHGSO-based Deep CNN) for image classification. Initially, the input image is pre-processed with a median filter. Then, segmentation is performed using the Improved Invasive Weed Flower Pollination Optimization (IIWFPO)-based SegNet, where IIWFPO integrates the Improved Invasive Weed Optimization (IWO) algorithm and the Flower Pollination Algorithm (FPA). Finally, image classification is performed using the FHGSO-based Deep CNN; the FHGSO algorithm is developed by integrating the FPA with the Henry Gas Solubility Optimization (HGSO) algorithm. The performance of the proposed method is analyzed on the Stanford Background dataset and compared with other image classification methods. The proposed model obtained values of 0.938, 0.955, and 0.907 for testing accuracy, sensitivity, and specificity, respectively.
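For clarity, the three reported metrics can be computed from confusion-matrix counts as in the sketch below; it assumes a binary setting for simplicity, whereas the paper's evaluation on the Stanford Background dataset is multi-class.

# Accuracy, sensitivity (true positive rate) and specificity (true negative rate)
# from binary ground-truth labels and predictions.
import numpy as np

def accuracy_sensitivity_specificity(y_true, y_pred):
    y_true = np.asarray(y_true).astype(bool)
    y_pred = np.asarray(y_pred).astype(bool)
    tp = np.sum(y_true & y_pred)
    tn = np.sum(~y_true & ~y_pred)
    fp = np.sum(~y_true & y_pred)
    fn = np.sum(y_true & ~y_pred)
    accuracy = (tp + tn) / y_true.size
    sensitivity = tp / (tp + fn + 1e-12)
    specificity = tn / (tn + fp + 1e-12)
    return accuracy, sensitivity, specificity

print(accuracy_sensitivity_specificity([1, 0, 1, 1, 0], [1, 0, 0, 1, 1]))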
... In forestry management, the models rely on RGB images. RGB images are digital images that use a color model that combines red, green, and blue light to create a wide spectrum of colors [4]. One of the main challenges with using RGB images for forestry applications is their inability to capture all the necessary details in highly cluttered and occluded environments. ...
May 2023 · International Journal of Electrical and Electronics Research
... It has one standard convolution (Conv) block followed by thirteen depthwise separable convolution blocks. Convolution is a filtering operation that extracts feature maps by filtering each pixel neighbourhood with kernel/filter weights [38,39]. The early convolution layers extract low-level features such as edges and corners, and the later layers extract high-level features such as size and shape. ...
July 2022 · Neural Computing and Applications
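The passage above describes a MobileNet-style block: a 3x3 depthwise convolution (one filter per input channel) followed by a 1x1 pointwise convolution that mixes channels. A minimal PyTorch sketch of such a block follows; channel sizes and the use of batch normalization and ReLU are illustrative assumptions, not details taken from the cited model.

# Depthwise separable convolution block: depthwise 3x3 conv + pointwise 1x1 conv.
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    def __init__(self, in_channels, out_channels, stride=1):
        super().__init__()
        # Depthwise: one 3x3 filter per input channel (groups = in_channels).
        self.depthwise = nn.Conv2d(in_channels, in_channels, kernel_size=3,
                                   stride=stride, padding=1,
                                   groups=in_channels, bias=False)
        # Pointwise: 1x1 convolution that mixes channels.
        self.pointwise = nn.Conv2d(in_channels, out_channels, kernel_size=1,
                                   bias=False)
        self.bn1 = nn.BatchNorm2d(in_channels)
        self.bn2 = nn.BatchNorm2d(out_channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        x = self.relu(self.bn1(self.depthwise(x)))
        return self.relu(self.bn2(self.pointwise(x)))

x = torch.randn(1, 32, 56, 56)
block = DepthwiseSeparableConv(32, 64)
print(block(x).shape)  # torch.Size([1, 64, 56, 56])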
... Over the past decade, deep learning has found widespread application in various data science fields (Bao et al., 2019; Caicedo et al., 2019), such as object detection (Apostolopoulos & Tzani, 2023; Gupta, Anpalagan, Guan, & Khwaja, 2021; Sharma & Mir, 2020; Singh & Desai, 2023; Zaidi et al., 2022; Zou, Chen, Shi, Guo, & Ye, 2023) and feature extraction (Apostolopoulos & Tzani, 2023; Dhiman et al., 2023; Kaur & Sharma, 2023; Mutlag, Ali, Aydam, & Taher, 2020; Sarigul, Ozyildirim, & Avci, 2020; Xiao, Chen, Gong, & Zhou, 2020), within the broader field of Artificial Intelligence (AI) (Ali et al., 2023; Almotairi et al., 2023; Kobrinskii, 2023; Ngombu, Binol, Gurcan, & Moberly, 2023; Wang et al., 2023). One of the most successful deep learning architectures for computer vision tasks is the Convolutional Neural Network (CNN) (Bharadiya, 2023; Deepa & Rasi, 2023; Moutik et al., 2023; Taye, 2023). CNNs integrate the feature extraction and classification stages; one variant is the differential convolution method (Abd El Kader et al., 2021; Dişken, 2023; Qu, Xu, Ding, Jia, & Sun, 2020; Sarıgül et al., 2019). ...
July 2022 · Applied Intelligence