Figure from the source publication in Applied Sciences.
Source publication
Research on clouds has an enormous influence on sky sciences and related applications, and cloud classification plays an essential role in it. Much research has been conducted which includes both traditional machine learning approaches and deep learning approaches. Compared with traditional machine learning approaches, deep learning approaches achi...
Citations
... Pre-trained VGG-16 Net for tongue image classification. Review of advances in tongue image classification techniques for diabetes identification (Ghazwan H. Hussien). CNN architecture for tongue image classification [20] ...
... CNN basic architecture for feature extraction [20]. ...
Classifying diabetic patients from tongue images enables non-invasive, low-cost, and efficient approaches to disease detection. Research in this area has evolved with a focus on patient healthcare, medical diagnostics, and early detection. The topic has grown in importance because it supports early diabetes detection, helps clinicians and patients, and enables proactive treatment aimed at controlling the condition; it also helps doctors choose appropriate treatments, since this metabolic disorder can damage many organs if left untreated. Deep learning algorithms, which combine feature extraction and pattern recognition, have made it possible to diagnose many diseases early, including diabetes, by processing and analyzing tongue images to classify diabetic patients. With the worldwide prevalence and incidence of diabetes rising, the demand for accurate and reliable diagnostic tools has motivated algorithms that extract diagnostic information from tongue images. This paper presents a comprehensive review of recent advances in tongue-image-based classification and diagnosis of diabetes, focusing on developments in deep network designs, feature extraction techniques, assessment methods, and deep learning methodology. It also reviews commonly used datasets and evaluation indicators for diabetes classification from tongue images, and discusses open challenges and possible research directions in this field.
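The citation context above mentions a pre-trained VGG-16 network; as a rough illustration of that kind of transfer-learning setup (not the reviewed paper's actual model; the input size, head layers, and class count are assumptions), a minimal Keras sketch might look like this:

```python
# Hypothetical sketch: VGG-16 as a frozen feature extractor for tongue-image
# classification (diabetic vs. non-diabetic). Input size, head layers, and
# class count are illustrative assumptions, not taken from the reviewed paper.
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16

base = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False  # keep the pre-trained convolutional features fixed

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(128, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(2, activation="softmax"),  # diabetic / non-diabetic
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_images, train_labels, epochs=10)  # with a labeled tongue-image set
```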
... Here, the 'Softmax' function is used. The average of the predicted class probabilities is calculated as shown in Equation 1 [79]. ...
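Equation 1 itself is not reproduced in this snippet; a plausible form of the referenced averaging, assuming N base models each producing a softmax probability p_{i,c} for class c, would be:

```latex
\bar{p}_c = \frac{1}{N}\sum_{i=1}^{N} p_{i,c},
\qquad
\hat{y} = \arg\max_{c} \bar{p}_c
```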
The most common causes of spine fractures, or vertebral column fractures (VCF), are traumas such as falls, sports injuries, or accidents. CT scans are affordable and effective for detecting VCF types accurately. VCF type identification in the cervical, thoracic, and lumbar (C3-L5) regions is limited and sensitive to inter-observer variability. To address this problem, this work introduces an autonomous approach for identifying VCF type by developing a novel ensemble of Vision Transformers (ViT) and the best-performing deep learning (DL) models, assisting orthopaedic specialists in easy and early identification of VCF types. The performance of several fine-tuned DL architectures, including VGG16, ResNet50, and DenseNet121, was investigated, and an ensemble classification model was developed to identify the best-performing combination of DL models. A ViT model was also trained to identify VCF. The best-performing DL models and the ViT were then fused using a weighted-average technique for type identification. To overcome data limitations, an extended Deep Convolutional Generative Adversarial Network (DCGAN) and a Progressive Growing Generative Adversarial Network (PGGAN) were developed. The VGG16-ResNet50-ViT ensemble outperformed all other ensemble models, achieving an accuracy of 89.98%. Extended DCGAN and PGGAN augmentation increased the accuracy of type identification to 90.28% and 93.68%, respectively, demonstrating the efficacy of PGGANs in augmenting VCF images. The study emphasizes the distinctive contributions of the ResNet50, VGG16, and ViT models to feature extraction, generalization, and global shape-based pattern capturing in VCF type identification. CT scans collected from a tertiary care hospital were used to validate these models.
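As a rough illustration of the weighted-average fusion described above (the weights, class count, and probability arrays are placeholder assumptions, not the paper's values), a minimal sketch:

```python
# Hypothetical sketch of weighted-average (late) fusion of softmax outputs
# from three classifiers, e.g. VGG16, ResNet50, and a ViT. Weights and the
# number of classes are illustrative assumptions.
import numpy as np

def weighted_average_fusion(prob_list, weights):
    """prob_list: list of (n_samples, n_classes) softmax arrays, one per model."""
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()               # normalize so weights sum to 1
    stacked = np.stack(prob_list, axis=0)           # (n_models, n_samples, n_classes)
    fused = np.tensordot(weights, stacked, axes=1)  # weighted sum over models
    return fused.argmax(axis=1)                     # predicted class per sample

# Example with random placeholder probabilities for 4 samples and 3 VCF types.
rng = np.random.default_rng(0)
probs = [rng.dirichlet(np.ones(3), size=4) for _ in range(3)]
print(weighted_average_fusion(probs, weights=[0.4, 0.3, 0.3]))
```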
... This study used accuracy, precision, sensitivity, and F1-score values to evaluate the classification performance based on the feature sets [50,51]. ...
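For reference, the four metrics named in this snippet can be computed as follows (a generic scikit-learn sketch with placeholder labels, not the study's own evaluation code):

```python
# Generic sketch of the four reported metrics using scikit-learn.
# y_true / y_pred are placeholder arrays, not data from the cited study.
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

y_true = [0, 1, 2, 2, 1, 0, 1]
y_pred = [0, 1, 2, 1, 1, 0, 2]

print("accuracy :", accuracy_score(y_true, y_pred))
print("precision:", precision_score(y_true, y_pred, average="macro"))
print("recall   :", recall_score(y_true, y_pred, average="macro"))  # sensitivity
print("F1-score :", f1_score(y_true, y_pred, average="macro"))
```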
With the impact of advancing technology, the automatic detection of human emotions is of great interest in various industries. Emotion recognition systems based on facial images are important for meeting needs in a wide range of application areas, such as security, marketing, advertising, and human-computer interaction. In this study, automatic facial expression detection of 7 different emotions (anger, disgust, fear, happy, neutral, sad, and surprised) from facial image data has been performed. The process steps of the study are as follows: (i) preprocessing the image data with grayscale conversion and image enhancement methods, (ii) feature extraction by applying Gradient Histogram, Haar Wavelet, and Gabor filter methods to the preprocessed images, (iii) modeling the feature sets obtained from the three feature extraction methods with a Convolutional Neural Network, (iv) determining with the Convolutional Neural Network which feature extraction method is most successful at detecting the 7 emotions. The experimental results show that the Gabor filter feature extraction method performs best, with an accuracy of 83.12%. Compared with other studies, the developed model contributes to the literature in terms of recognition rate, dataset size, and feature engineering methods.
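A minimal sketch of Gabor-filter feature extraction of the kind described in step (ii), assuming OpenCV and illustrative kernel parameters (not the study's settings):

```python
# Illustrative sketch of Gabor-filter feature extraction with OpenCV; the
# kernel parameters and the input patch are assumptions, not the study's values.
import cv2
import numpy as np

def gabor_features(gray, ksize=21, sigma=4.0, lambd=10.0, gamma=0.5):
    """Filter a grayscale image with Gabor kernels at several orientations
    and return mean/std statistics of each response as a feature vector."""
    feats = []
    for theta in np.arange(0, np.pi, np.pi / 4):   # 4 orientations
        kernel = cv2.getGaborKernel((ksize, ksize), sigma, theta, lambd, gamma)
        response = cv2.filter2D(gray, cv2.CV_32F, kernel)
        feats.extend([response.mean(), response.std()])
    return np.array(feats)

# Example with a synthetic 48x48 grayscale face patch.
patch = np.random.randint(0, 256, (48, 48), dtype=np.uint8)
print(gabor_features(patch).shape)  # (8,)
```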
... The rectified linear unit (ReLU) is used as the activation function. To prevent the model from overfitting, the regularisation technique 'dropout' is applied before the fully connected layer (Li et al. 2021a, b, c; Srivastava et al. 2014; Phung and Rhee 2019). Because the proposed model is simple and does not repeat the essential blocks of the main CNN architecture, the dropout layer is not repeated and is used only once. ...
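A minimal Keras sketch of this pattern, with a single dropout layer placed before the fully connected head (filter counts, dropout rate, input shape, and class count are assumed values, not the cited model's):

```python
# Illustrative sketch: ReLU activations with a single dropout layer placed
# just before the fully connected layers. All sizes are assumed values.
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Conv2D(32, (3, 3), activation="relu", input_shape=(64, 64, 3)),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dropout(0.5),                      # used only once, before the dense head
    layers.Dense(128, activation="relu"),
    layers.Dense(10, activation="softmax"),
])
model.summary()
```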
Timely and accurate crop mapping is crucial for yield prediction, food security assessment and agricultural management. Convolutional neural networks (CNNs) have become powerful state-of-the-art methods in many fields, including crop type detection from satellite imagery. However, existing CNNs generally have a large number of layers and filters, which increases the computational cost and the number of parameters to be learned and may not be convenient for processing time-series images. To that end, we propose a light CNN model in combination with parcel-based image analysis for crop classification from time-series images. The model was applied on two areas (Manisa and Kırklareli) in Türkiye using Sentinel-2 data. Classification results based on all bands of the time-series data had overall accuracies (OA) of 89.3% and 88.3%, respectively for Manisa and Kırklareli. The results based on the optimal bands selected through the Support Vector Machine–Recursive Feature Elimination (SVM-RFE) method had OA of 86.6% and 86.5%, respectively. The proposed model outperformed the VGG-16, ResNet-50, and U-Net models used for comparison. For Manisa and Kırklareli respectively, VGG-16 achieved OA of 86.0% and 86.5%, ResNet-50 achieved OA of 84.1% and 84.8%, and U-Net achieved OA of 82.2% and 81.9% based on all bands. Based on the optimal bands, VGG-16 achieved OA of 84.2% and 84.7%, ResNet-50 achieved OA of 82.4% and 83.1%, and U-Net achieved OA of 80.5% and 80.2%. The results suggest that the proposed model is promising for accurate and cost-effective crop classification from Sentinel-2 time-series imagery.
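The SVM-RFE band selection mentioned above can be sketched as follows (the feature matrix, class labels, and number of retained bands are synthetic placeholders, not the study's data or configuration):

```python
# Illustrative SVM-RFE sketch: rank spectral-temporal band features with a
# linear SVM and keep the top-k. Data shapes and k are placeholder assumptions.
import numpy as np
from sklearn.feature_selection import RFE
from sklearn.svm import SVC

X = np.random.rand(500, 40)          # 500 parcels x 40 band/date features (synthetic)
y = np.random.randint(0, 5, 500)     # 5 crop classes (synthetic labels)

svm = SVC(kernel="linear")           # RFE requires an estimator exposing coef_
rfe = RFE(estimator=svm, n_features_to_select=12, step=1)
rfe.fit(X, y)

selected = np.where(rfe.support_)[0]
print("Selected band indices:", selected)
```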
... CNN (Convolutional Neural Network) architecture. Source: [64] ...
IoT is expanding at a tremendous rate, touching our daily lives through various applications such as smart cities, healthcare, agriculture, and industrial automation, among others. Among the diverse types of data produced by IoT devices, image data has risen to the forefront as one of the most useful sources for real-time identification and decision making. This paper discusses the critical contribution of image processing and deep learning to improving IoT systems. Image acquisition, preprocessing, segmentation, and feature extraction procedures form the basis for extracting significant information from raw image data. Deep learning approaches such as CNNs, RNNs, and transfer learning make feature extraction, classification, and object detection more accurate and largely automated. These technologies have been applied to traffic monitoring, medical diagnosis, environmental monitoring, and industrial fault diagnosis. Nonetheless, issues of resource availability, latency, and data security act as barriers to adoption, especially in edge and fog computing. To overcome these constraints, lightweight deep learning, edge AI, and privacy-preserving methodologies are being advanced for efficient, secure, and real-time performance. Emerging trends such as federated learning and 5G are also likely to shape the future of image processing in IoT systems. This paper systematically and critically reviews recent advances in the application of image processing and deep learning to IoT-based architectures, providing insight into the landscape, challenges, and future trends. It is meant to guide researchers and industry experts working on building smarter, more scalable, and efficient IoT systems.
... This process is repeated until each fold has been used as validation data exactly once. This technique, commonly known as K-Fold cross-validation, ensures a thorough and reliable evaluation of the model's performance [14,15]. ...
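A generic K-Fold cross-validation sketch (scikit-learn), with a placeholder model and synthetic data rather than the cited study's setup:

```python
# Generic K-Fold cross-validation sketch; the model and the synthetic data
# are placeholders, not those of the cited study.
import numpy as np
from sklearn.model_selection import KFold
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

X = np.random.rand(100, 8)
y = np.random.randint(0, 2, 100)

scores = []
for train_idx, val_idx in KFold(n_splits=5, shuffle=True, random_state=42).split(X):
    model = LogisticRegression(max_iter=1000).fit(X[train_idx], y[train_idx])
    scores.append(accuracy_score(y[val_idx], model.predict(X[val_idx])))

print("Mean CV accuracy:", np.mean(scores))  # each fold serves as validation once
```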
Expanding exploration activities into new fields has significantly boosted oil production. Well logging is a key method in petroleum exploration, used to evaluate hydrocarbon zones by analyzing parameters such as gamma ray, porosity, density, resistivity, and wave propagation velocity. These parameters are displayed as vertical log curves against well depth. However, logging tools sometimes fail to capture formation parameters accurately, creating gaps in well log data. Sonic log data are particularly prone to such gaps, as they are newer and less common in older wells. To address missing data, machine learning algorithms, like gradient boosting, provide an effective solution. Gradient boosting employs an ensemble of decision trees, iteratively correcting errors to model complex data patterns. This method is especially suitable for handling the intricate nature of well log data. In this study, Python was used to develop predictions for missing data, demonstrating the capability of machine learning to enhance data reliability and improve petroleum exploration processes. By bridging data gaps, machine learning ensures more accurate assessments of hydrocarbon zones, supporting better exploration outcomes.
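As a rough illustration of filling a sonic-log gap with gradient boosting as described above (the log mnemonics, synthetic values, and model settings are assumptions, not the study's data):

```python
# Illustrative sketch: predict a missing sonic log (DT) from other well logs
# with gradient boosting. Column names and synthetic data are assumptions.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(1)
logs = pd.DataFrame({
    "GR":   rng.uniform(20, 150, 1000),    # gamma ray
    "RHOB": rng.uniform(1.9, 2.9, 1000),   # bulk density
    "NPHI": rng.uniform(0.05, 0.45, 1000), # neutron porosity
    "RT":   rng.uniform(0.5, 200, 1000),   # resistivity
    "DT":   rng.uniform(50, 140, 1000),    # sonic transit time (target)
})
logs.loc[800:, "DT"] = np.nan              # simulate a gap in the sonic log

known = logs.dropna(subset=["DT"])
missing = logs[logs["DT"].isna()]

model = GradientBoostingRegressor(n_estimators=300, learning_rate=0.05, max_depth=3)
model.fit(known[["GR", "RHOB", "NPHI", "RT"]], known["DT"])
logs.loc[missing.index, "DT"] = model.predict(missing[["GR", "RHOB", "NPHI", "RT"]])
```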
... A CNN model for small datasets, with regularization techniques and a model-average ensemble, enhances generalization and classification accuracy in cloud categorization research [17]. Evaluation on the SWIMCAT dataset demonstrates perfect classification accuracy, highlighting the model's robustness [18]. An MCUa dynamic deep CNN model classifies breast histology images using multilevel context-aware models and uncertainty quantification, achieving high accuracy by addressing categorization challenges due to visual heterogeneity and the lack of contextual information in large digital data [19]. ...
Model optimization is a problem of great concern and challenge when developing an image classification model. In image classification, selecting appropriate hyperparameters can substantially boost the model’s ability to learn intricate patterns and features from complex image data. Hyperparameter optimization helps to prevent overfitting by finding the right balance between the complexity and generalization of a model. An ensemble of a genetic algorithm and a convolutional neural network (EGACNN) is proposed to enhance image classification by fine-tuning hyperparameters. The convolutional neural network (CNN) model is combined with a genetic algorithm (GA) using stacking, based on the Modified National Institute of Standards and Technology (MNIST) dataset, to enhance efficiency and prediction rate in image classification. The GA optimizes the number of layers, kernel size, learning rate, dropout rate, and batch size of the CNN model to improve its accuracy and performance. The objective of this research is to improve the CNN-based image classification system by utilizing the advantages of ensemble learning and the GA. The highest accuracy, 99.91%, is obtained using the proposed EGACNN model, while the ensemble CNN and spiking neural network (CSNN) model shows an accuracy of 99.68%. Ensemble approaches such as EGACNN and CSNN tend to be more effective than CNN, RNN, AlexNet, ResNet, and VGG models. Hyperparameter optimization of deep learning classification models reduces human effort and produces better prediction results. Performance comparison with existing approaches also shows the superior performance of the proposed model.
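As a toy illustration of GA-based hyperparameter search of the kind described above (the search space, GA settings, and the stub fitness function are assumptions; a real fitness function would train and validate the CNN):

```python
# Toy sketch of GA-based hyperparameter search over a few CNN settings.
# The search space, GA parameters, and the fitness stub are illustrative
# assumptions; a real fitness function would train a CNN and return
# validation accuracy.
import random

SPACE = {
    "lr":      [1e-2, 1e-3, 1e-4],
    "dropout": [0.2, 0.3, 0.5],
    "batch":   [32, 64, 128],
    "kernel":  [3, 5],
}

def random_individual():
    return {k: random.choice(v) for k, v in SPACE.items()}

def fitness(ind):
    # Placeholder score standing in for validation accuracy of a trained CNN;
    # here it simply rewards one arbitrary combination of settings.
    return -abs(ind["lr"] - 1e-3) - abs(ind["dropout"] - 0.3) / 10

def crossover(a, b):
    return {k: random.choice([a[k], b[k]]) for k in SPACE}

def mutate(ind, rate=0.2):
    return {k: (random.choice(SPACE[k]) if random.random() < rate else v)
            for k, v in ind.items()}

population = [random_individual() for _ in range(10)]
for generation in range(15):
    population.sort(key=fitness, reverse=True)
    parents = population[:4]                      # elitist selection
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(len(population) - len(parents))]
    population = parents + children

print("Best hyperparameters:", max(population, key=fitness))
```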
... Architecture of CNN [65]. ...
The proliferation of Internet of Things (IoT) devices has brought about an increased threat of botnet attacks, necessitating robust security measures. In response to this evolving landscape, deep learning (DL)-based intrusion detection systems (IDS) have emerged as a promising approach for detecting and mitigating botnet activities in IoT environments. Therefore, this paper thoroughly reviews existing literature on botnet detection in the IoT using DL-based IDS. It consolidates and analyzes a wide range of research papers, highlighting key findings, methodologies, advancements, shortcomings, and challenges in the field. Additionally, we performed a qualitative comparison with existing surveys using author-defined metrics to underscore the uniqueness of this survey. We also discuss challenges, limitations, and future research directions, emphasizing the distinctive contributions of our review. Ultimately, this survey serves as a guideline for future researchers, contributing to the advancement of botnet detection methods in IoT environments and enhancing security against botnet threats.
... For example, Zhang et al. [13] introduced a CNN model evolved from AlexNet that can effectively classify ten cloud genera and one contrail class from ground-based cloud images. Phung et al. [14] designed CNN models suitable for small datasets and employed regularization techniques to enhance model generalizability and avoid overfitting. Manzo et al. [15] utilized transfer learning and a voting mechanism for cloud image classification. ...
... Traditional ML-based methods were initially used to develop classifiers such as SVM (Saleem et al., 2021a) and local-descriptor approaches including Local Binary Pattern (LBP) (Thangaraj et al., 2022), Local Ternary Pattern (LTP) (Vishnoi et al., 2022), and SIFT and SURF (Domingues et al., 2022). Numerous studies have applied these methods to pest classification (Aziz et al., 2019; Aurangzeb et al., 2020; Phung and Rhee, 2019). Handcrafted-feature approaches require less training data, but they depend on the expertise of qualified human professionals. ...
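As a minimal sketch of a handcrafted-feature pipeline of this kind (LBP histograms fed to an SVM; parameters and synthetic images are placeholders, not values from the cited studies):

```python
# Illustrative sketch of a handcrafted-feature pipeline: Local Binary Pattern
# (LBP) histograms fed to an SVM classifier. Parameters and the synthetic
# images are assumptions, not taken from the cited studies.
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC

def lbp_histogram(gray, P=8, R=1.0):
    lbp = local_binary_pattern(gray, P, R, method="uniform")
    hist, _ = np.histogram(lbp, bins=P + 2, range=(0, P + 2), density=True)
    return hist

# Synthetic grayscale patches and binary labels just to make the sketch runnable.
rng = np.random.default_rng(0)
images = rng.integers(0, 256, (40, 32, 32)).astype(np.uint8)
labels = rng.integers(0, 2, 40)

X = np.array([lbp_histogram(img) for img in images])
clf = SVC(kernel="rbf").fit(X, labels)
print("Training accuracy:", clf.score(X, labels))
```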