Figure 1 - available via license: CC BY
Source publication
Research on clouds has an enormous influence on sky sciences and related applications, and cloud classification plays an essential role in it. Much research has been conducted, including both traditional machine learning approaches and deep learning approaches. Compared with traditional machine learning approaches, deep learning approaches achi...
Contexts in source publication
Context 1
... are very different from other pattern recognition algorithms in that CNNs combine both feature extraction and classification [32]. Figure 1 shows an example of a simple schematic representation of a basic CNN. This simple network consists of five different layers: an input layer, a convolution layer, a pooling layer, a fully-connected layer, and an output layer. ...
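As an illustration of the five-layer structure described above, the following minimal Keras sketch stacks an input layer, one convolution layer, one pooling layer, one fully-connected layer, and a softmax output layer. The image size (128×128×3), filter counts, and number of classes are illustrative assumptions, not values from the source publication.

```python
# Minimal sketch of the basic five-layer CNN described in Context 1:
# input -> convolution -> pooling -> fully-connected -> output.
# Input shape and class count are assumed for illustration.
from tensorflow.keras import layers, models

def build_basic_cnn(input_shape=(128, 128, 3), num_classes=5):
    model = models.Sequential([
        layers.Input(shape=input_shape),                  # input layer
        layers.Conv2D(32, (3, 3), activation="relu"),     # convolution layer
        layers.MaxPooling2D(pool_size=(2, 2)),            # pooling layer
        layers.Flatten(),
        layers.Dense(128, activation="relu"),             # fully-connected layer
        layers.Dense(num_classes, activation="softmax"),  # output layer
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```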
Context 2
... when we reduce the learning rate from 0.001 to 0.0001, the performance of both training and validation improves significantly and is very stable. Figure 10 demonstrates the effect of batch size on the training and validation history. With a large batch size, such as 64, small oscillations appear in the early epochs, and after longer training these oscillations are eliminated. ...
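The learning-rate and batch-size settings discussed above can be expressed as a small training sketch. The values 0.0001 and 64 come from the context; the Adam optimizer, epoch count, and the `build_basic_cnn` helper (from the previous sketch) are assumptions.

```python
# Illustrative training setup for the learning-rate and batch-size comparison
# in Context 2. Data arrays are supplied by the caller; all other settings
# here are assumptions rather than the authors' exact configuration.
from tensorflow.keras import optimizers

def train_with_settings(x_train, y_train, x_val, y_val):
    model = build_basic_cnn()  # helper from the previous sketch (assumed)
    # Lowering the learning rate from 0.001 to 0.0001 stabilizes training.
    model.compile(optimizer=optimizers.Adam(learning_rate=1e-4),
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    # A larger batch size (e.g. 64) may oscillate in early epochs before settling.
    return model.fit(x_train, y_train,
                     validation_data=(x_val, y_val),
                     batch_size=64,
                     epochs=100)
```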
Context 3
... also evaluated the performance of each of the ten standalone models on the test set. Figure 11 shows the accuracies of the accumulative ensemble models, indicated by the blue stars and curve, as well as the accuracies of the individual standalone models, indicated by the red circles. When there is only one model, the results of the ensemble model and the single model are always identical, and as the number of models increases, the advantages of the ensemble are clearly revealed. ...
Context 4
... the number of models in a CNN ensemble is normally from 5 to 10, because more models make training more time consuming and computationally expensive. As shown in Figure 11, when there are five or more models, the blue curves of the ensembles show accuracies that are better than or comparable to those of single models in almost all cases. Fold 1, fold 2, and fold 5 had the same characteristics. ...
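A minimal sketch of the model-average ensemble idea behind Contexts 3 and 4: the class-probability outputs of N independently trained CNNs are averaged, and accumulative ensembles of the first k models are evaluated for k = 1..N. The function names and evaluation details are illustrative, not the authors' code.

```python
# Model-average ensemble: average the softmax outputs of several trained
# models and take the class with the highest mean probability.
import numpy as np

def ensemble_predict(models, x_test):
    # Each model returns class probabilities of shape (n_samples, n_classes).
    probs = np.stack([m.predict(x_test) for m in models], axis=0)
    mean_probs = probs.mean(axis=0)      # average over the ensemble
    return mean_probs.argmax(axis=1)     # predicted class per sample

def accumulative_accuracies(models, x_test, y_test):
    # Evaluate ensembles of the first k models for k = 1..N, as in Figure 11.
    accs = []
    for k in range(1, len(models) + 1):
        preds = ensemble_predict(models[:k], x_test)
        accs.append((preds == y_test).mean())
    return accs
```

With a single model the ensemble prediction reduces to that model's own prediction, which matches the observation above that the one-model ensemble and the standalone model are identical.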
Citations
... Pre-trained VGG-16 Net for tongue image classification. Review of advances in tongue image classification techniques for diabetes identification (Ghazwan H. Hussien). CNN architecture for tongue image classification [20] ...
... Basic CNN architecture for feature extraction [20]. ...
Grouping diabetic patients using tongue scans enables non-invasive, economical, and efficient approaches to disease detection. Research has evolved largely around patient healthcare, medical diagnostics, and early detection. This research topic has become more important because it supports early diabetes detection, helps clinicians and patients, and targets proactive treatments meant to reduce the condition. It helps doctors decide which diabetes treatments to prioritize, because this metabolic disorder can damage many organs if it is not treated properly. Deep learning algorithms have made it possible to diagnose many diseases early, including diabetes, by processing and analyzing tongue images to classify diabetic patients, combining feature extraction and pattern recognition. The number of people with diabetes worldwide is rising, and the number of new cases is also increasing. The demand for accurate and reliable diagnostic tools has driven the development of algorithms that analyze tongue images and extract essential information from them. This paper presents a comprehensive review of current advancements in the classification and diagnosis of diabetes from tongue images, focusing on developments in deep network designs, feature extraction techniques, assessment methods, and deep learning methodology. The review analyzes commonly used datasets and indicators for diabetes classification from tongue images, covers the challenges faced, and outlines possible research directions to provide innovative ideas in this field.
... A rectified linear unit (ReLU) is used as the activation function. To prevent the model from overfitting, the regularisation technique 'dropout' is applied before the fully connected layer (Li et al. 2021a, b, c; Srivastava et al. 2014; Phung and Rhee 2019). The dropout layer is not repeated throughout the network since the proposed model is simple and does not repeat the essential steps of the main CNN architecture; it is used only once. ...
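A minimal sketch of the dropout placement described in the snippet above, with a single Dropout layer applied once before the fully-connected layer. The dropout rate (0.5), input size, and class count are assumptions.

```python
# Dropout applied once, just before the fully-connected layer (illustrative).
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(128, 128, 3)),             # assumed input size
    layers.Conv2D(32, (3, 3), activation="relu"),  # ReLU activation
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dropout(0.5),                           # single dropout before the dense layer
    layers.Dense(128, activation="relu"),
    layers.Dense(5, activation="softmax"),         # assumed class count
])
```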
Timely and accurate crop mapping is crucial for yield prediction, food security assessment and agricultural management. Convolutional neural networks (CNNs) have become powerful state-of-the-art methods in many fields, including crop type detection from satellite imagery. However, existing CNNs generally have a large number of layers and filters that increase the computational cost and the number of parameters to be learned, which may not be convenient for the processing of time-series images. To that end, we propose a light CNN model in combination with parcel-based image analysis for crop classification from time-series images. The model was applied on two areas (Manisa and Kırklareli) in Türkiye using Sentinel-2 data. Classification results based on all bands of the time-series data had overall accuracies (OA) of 89.3% and 88.3%, respectively for Manisa and Kırklareli. The results based on the optimal bands selected through the Support Vector Machine–Recursive Feature Elimination (SVM-RFE) method had OA of 86.6% and 86.5%, respectively. The proposed model outperformed the VGG-16, ResNet-50, and U-Net models used for comparison. For Manisa and Kırklareli respectively, VGG-16 achieved OA of 86.0% and 86.5%, ResNet-50 achieved OA of 84.1% and 84.8%, and U-Net achieved OA of 82.2% and 81.9% based on all bands. Based on the optimal bands, VGG-16 achieved OA of 84.2% and 84.7%, ResNet-50 achieved OA of 82.4% and 83.1%, and U-Net achieved OA of 80.5% and 80.2%. The results suggest that the proposed model is promising for accurate and cost-effective crop classification from Sentinel-2 time-series imagery.
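As a rough illustration of the SVM-RFE band selection mentioned in the abstract, the following sketch wraps scikit-learn's RFE around a linear SVM; the feature matrix layout, labels, and number of retained bands are assumptions, not the authors' configuration.

```python
# Sketch of Support Vector Machine-Recursive Feature Elimination (SVM-RFE)
# for selecting spectral bands. X is an assumed (samples x bands) matrix and
# y the assumed crop labels; n_bands is an illustrative choice.
from sklearn.feature_selection import RFE
from sklearn.svm import SVC

def select_optimal_bands(X, y, n_bands=10):
    svm = SVC(kernel="linear")            # linear kernel exposes feature weights
    rfe = RFE(estimator=svm, n_features_to_select=n_bands, step=1)
    rfe.fit(X, y)
    band_indices = rfe.get_support(indices=True)
    return X[:, band_indices], band_indices
```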
... CNN (convolutional neural network) architecture. Source: [64] ...
IoT is expanding at a tremendous rate, touching our daily lives through applications such as smart cities, healthcare, agriculture, and industrial automation, among others. Among the diverse types of data produced by IoT devices, image data has risen to the forefront as one of the most useful for real-time identification and decision making. The critical contribution of image processing and deep learning to improving IoT systems is discussed in this paper. Image acquisition, preprocessing, segmentation, and feature extraction procedures form the basis for extracting significant information from raw image data. Deep learning approaches such as CNNs, RNNs, and transfer learning make classification, feature extraction, and object detection more accurate and fully automated. These technologies have been incorporated into traffic monitoring, medical diagnosis, environmental monitoring, and industrial fault diagnosis applications. Nonetheless, issues of resource availability, latency, and data security act as barriers to the adoption of microservices, especially in edge and fog computing. To overcome these constraints, lightweight deep learning, edge AI, and privacy-preserving methodologies are being advanced for efficient, secure, and real-time performance. Trends such as federated learning and 5G technologies may also define the future of image processing in IoT systems. This paper systematically and critically reviews recent advances in the application of image processing and deep learning to IoT-based architectures, providing insight into their profile, challenges, and future trends. It is meant to guide researchers and industry experts who are working on building smarter, more scalable, and more efficient IoT systems.
... This process is repeated until each fold has been used as validation data exactly once. This technique, commonly known as K-Fold cross-validation, ensures a thorough and reliable evaluation of the model's performance [14,15]. ...
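A minimal scikit-learn sketch of the K-Fold cross-validation procedure described above, in which each fold serves as validation data exactly once. The fold count (K=5) and the `build_model` factory are assumed placeholders.

```python
# K-Fold cross-validation: each fold is held out as validation data once.
from sklearn.model_selection import KFold
import numpy as np

def kfold_evaluate(build_model, X, y, k=5):
    kf = KFold(n_splits=k, shuffle=True, random_state=42)
    scores = []
    for train_idx, val_idx in kf.split(X):
        model = build_model()                  # fresh model for each fold (assumed factory)
        model.fit(X[train_idx], y[train_idx])
        scores.append(model.score(X[val_idx], y[val_idx]))
    return np.mean(scores), scores
```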
Expanding exploration activities into new fields has significantly boosted oil production. Well logging is a key method in petroleum exploration, used to evaluate hydrocarbon zones by analyzing parameters such as gamma ray, porosity, density, resistivity, and wave propagation velocity. These parameters are displayed as vertical log curves against well depth. However, logging tools sometimes fail to capture formation parameters accurately, creating gaps in well log data. Sonic log data are particularly prone to such gaps, as they are newer and less common in older wells. To address missing data, machine learning algorithms, like gradient boosting, provide an effective solution. Gradient boosting employs an ensemble of decision trees, iteratively correcting errors to model complex data patterns. This method is especially suitable for handling the intricate nature of well log data. In this study, Python was used to develop predictions for missing data, demonstrating the capability of machine learning to enhance data reliability and improve petroleum exploration processes. By bridging data gaps, machine learning ensures more accurate assessments of hydrocarbon zones, supporting better exploration outcomes.
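A hedged sketch of the gradient-boosting idea described above, filling gaps in a sonic log from other well-log curves. The column names (GR, NPHI, RHOB, RT, DT) and hyperparameters are illustrative assumptions, not the study's actual setup.

```python
# Predict missing sonic (DT) values from other logs with gradient boosting.
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor

def fill_missing_sonic(logs: pd.DataFrame,
                       features=("GR", "NPHI", "RHOB", "RT"),
                       target="DT"):
    features = list(features)
    train = logs[logs[target].notna()]         # depths with sonic readings
    missing = logs[logs[target].isna()]        # depths with gaps
    gbr = GradientBoostingRegressor(n_estimators=500,
                                    learning_rate=0.05,
                                    max_depth=3)
    gbr.fit(train[features], train[target])    # trees iteratively correct residual errors
    logs = logs.copy()
    logs.loc[logs[target].isna(), target] = gbr.predict(missing[features])
    return logs
```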
... A CNN model for small datasets with regularization techniques and model-average ensembling enhances generalization and classification accuracy in cloud categorization research [17]. Evaluation on the SWIMCAT dataset demonstrates perfect classification accuracy, highlighting the model's robustness [18]. An MCUa dynamic deep CNN model classifies breast histology images using multi-level context-aware models and uncertainty quantification, achieving high accuracy by addressing categorization challenges caused by visual heterogeneity and a lack of contextual information in large digital data [19]. ...
Model optimization is a problem of great concern and challenge for developing an image classification model. In image classification, selecting the appropriate hyperparameters can substantially boost the model’s ability to learn intricate patterns and features from complex image data. Hyperparameter optimization helps to prevent overfitting by finding the right balance between complexity and generalization of a model. The ensemble genetic algorithm and convolutional neural network (EGACNN) approach is proposed to enhance image classification by fine-tuning hyperparameters. The convolutional neural network (CNN) model is combined with a genetic algorithm (GA) using stacking based on the Modified National Institute of Standards and Technology (MNIST) dataset to enhance efficiency and prediction rate on image classification. The GA optimizes the number of layers, kernel size, learning rates, dropout rates, and batch sizes of the CNN model to improve the accuracy and performance of the model. The objective of this research is to improve the CNN-based image classification system by utilizing the advantages of ensemble learning and GA. The highest accuracy is obtained using the proposed EGACNN model, which is 99.91%, and the ensemble CNN and spiking neural network (CSNN) model shows an accuracy of 99.68%. The ensemble approaches like EGACNN and CSNN tend to be more effective as compared to CNN, RNN, AlexNet, ResNet, and VGG models. The hyperparameter optimization of deep learning classification models reduces human effort and produces better prediction results. Performance comparison with existing approaches also shows the superior performance of the proposed model.
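A highly simplified sketch of genetic-algorithm hyperparameter search in the spirit of the EGACNN abstract above. The search space, GA settings, and the externally supplied fitness function (e.g., validation accuracy of a CNN trained with the candidate hyperparameters) are assumptions, not the authors' implementation.

```python
# Toy genetic-algorithm search over CNN hyperparameters (illustrative only).
import random

SEARCH_SPACE = {
    "n_layers":      [2, 3, 4],
    "kernel_size":   [3, 5],
    "learning_rate": [1e-2, 1e-3, 1e-4],
    "dropout":       [0.2, 0.3, 0.5],
    "batch_size":    [32, 64, 128],
}

def random_individual():
    return {k: random.choice(v) for k, v in SEARCH_SPACE.items()}

def crossover(a, b):
    # Each gene is inherited from one of the two parents at random.
    return {k: random.choice([a[k], b[k]]) for k in SEARCH_SPACE}

def mutate(ind, rate=0.1):
    return {k: (random.choice(SEARCH_SPACE[k]) if random.random() < rate else v)
            for k, v in ind.items()}

def evolve(fitness, pop_size=10, generations=5):
    # fitness(individual) is assumed to train/evaluate a CNN and return accuracy.
    pop = [random_individual() for _ in range(pop_size)]
    for _ in range(generations):
        ranked = sorted(pop, key=fitness, reverse=True)
        parents = ranked[: pop_size // 2]
        children = [mutate(crossover(*random.sample(parents, 2)))
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return max(pop, key=fitness)
```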
... Architecture of a CNN [65]. ...
The proliferation of Internet of Things (IoT) devices has brought about an increased threat of botnet attacks, necessitating robust security measures. In response to this evolving landscape, deep learning (DL)-based intrusion detection systems (IDS) have emerged as a promising approach for detecting and mitigating botnet activities in IoT environments. Therefore, this paper thoroughly reviews existing literature on botnet detection in the IoT using DL-based IDS. It consolidates and analyzes a wide range of research papers, highlighting key findings, methodologies, advancements, shortcomings, and challenges in the field. Additionally, we performed a qualitative comparison with existing surveys using author-defined metrics to underscore the uniqueness of this survey. We also discuss challenges, limitations, and future research directions, emphasizing the distinctive contributions of our review. Ultimately, this survey serves as a guideline for future researchers, contributing to the advancement of botnet detection methods in IoT environments and enhancing security against botnet threats.
... Traditional ML-based methods are initially used to develop classifiers like SVM (Saleem et al., 2021a) and Local Binary Pattern (LBP) using local descriptors (Thangaraj et al., 2022), Local Ternary Pattern (LTP) (Vishnoi et al., 2022), SIFT and SURF (Domingues et al., 2022). Numerous studies have been conducted on these methods for the classification of pests (Aziz et al., 2019; Aurangzeb et al., 2020; Phung and Rhee, 2019). Handcrafted feature approaches require less training data, but they require the expertise of qualified human professionals. ...
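For contrast with CNN-based pipelines, the following sketch shows a handcrafted-feature baseline of the kind cited above: uniform Local Binary Pattern histograms fed to an SVM. The LBP parameters (P=8, R=1) and the RBF kernel are illustrative assumptions.

```python
# Handcrafted-feature baseline: LBP histograms + SVM classifier (illustrative).
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC

def lbp_histogram(gray_image, P=8, R=1):
    # Uniform LBP yields integer codes in [0, P+1], hence P+2 histogram bins.
    lbp = local_binary_pattern(gray_image, P, R, method="uniform")
    hist, _ = np.histogram(lbp, bins=P + 2, range=(0, P + 2), density=True)
    return hist

def train_lbp_svm(gray_images, labels):
    X = np.array([lbp_histogram(img) for img in gray_images])
    return SVC(kernel="rbf").fit(X, labels)
```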
... Schematic diagram of a basic CNN architecture [13]. ...
The human skin, which is the largest organ and the outermost layer of the body, has seven layers that serve to shield interior organs. Because of its broad role in the integumentary system, maintaining the health of the skin is essential. Skin problems present substantial classification challenges for medical professionals since they include a wide spectrum of diseases, including dermatoses. Consequently, medical professionals increasingly depend on machine learning (ML) technologies to help them predict and categorize these diseases. In the field of imaging, convolutional neural networks (CNNs) have demonstrated performance that is comparable to, and in some cases surpasses, human capabilities. Within this research, we propose a novel CNN architecture designed to classify two specific skin diseases: Eczema (symptoms on legs and hands) and Seborrheic Keratoses (symptoms on ears and skin). Additionally, we compare the performance of six ML algorithms to determine the most accurate model. We trained and tested our proposed technique on the Dermnet 2021 dataset, which consists of 2,332 pictures and is publicly available on Kaggle. Our findings show that the suggested CNN model, which achieves an accuracy of 91.1% and an F1-score of 92.3%, surpasses other cutting-edge techniques. With an F1-score of 79.12% and an accuracy of 78.41%, Linear Regression (LR) was the most successful of the ML models examined.
... CNN structure [9]. ...
In the era of big data and artificial intelligence, traditional practices in accounting, banking, auditing, and financial management are facing growing limitations. Emerging technologies, including deep learning and data analytics, are providing innovative solutions to overcome these challenges. In accounting and auditing, artificial intelligence techniques such as convolutional neural networks (CNNs) enhance fraud detection, contract analysis, and record verification, significantly improving accuracy and efficiency. In the financial sector, AI-driven tools enhance risk management, marketing, customer service, and the credit approval process. The integration of blockchain and smart contracts ensures transparency, security, and efficiency in financial transactions, particularly in credit assessment. Governments and businesses are leveraging big data platforms to improve governance and operational efficiency, especially in financial auditing, where cloud computing optimizes document management and audit quality. This paper explores the transformative role of these technologies in reshaping the financial landscape, offering advanced solutions to long-standing challenges.
... A structure of a CNN (from [136]). ...
The work presented in this Ph.D. thesis focuses on developing detection and recognition systems for diseases that affect agricultural crops and plantations based on deep learning and machine learning. The proposed detection and recognition systems in the literature offer precise solutions for early identification and effective management. However, they still face several challenges and are still limited in their use. The first challenge is their lack of robustness and generalization. Their use is limited to the types of crops and diseases encountered during the learning process, leading to problems when faced with new types of crops and diseases not seen in the training phase. The second challenge is that the majority of these systems are designed to detect only one disease at a time and do not address the problem of simultaneous multi-disease detection. Another challenge is that most proposed systems detect diseases from leaves, which is common because leaves are often the first place where diseases appear in plants. However, some diseases infect the tree branches and do not appear on the leaves. To tackle these challenges, we have introduced three plant disease recognition systems. The first system is based on deep learning, capable of distinguishing between healthy and diseased leaves regardless of the crop type and disease, even if the system has not encountered them during the training phase. The primary idea is to prioritize the identification of small diseased leaf regions rather than solely relying on the overall appearance of the diseased leaf. Moreover, it includes assessing the disease's prevalence rate across the entire leaf. To ensure efficient classification, we employ a Small Inception model architecture, which is adept at processing small regions without compromising performance. The second proposed system is a deep learning-based model designed to detect and recognize multiple diseases simultaneously from any crop type, including those not encountered during the training process. Our method enables the independent recognition of each disease's symptoms within small leaf regions, regardless of the presence of other diseases on the same leaf and irrespective of the crop type. This is accomplished through the isolation method, which isolates each region containing specific disease symptoms and eliminates the influence of crop characteristics, in conjunction with the Small Inception model architecture. Additionally, our approach enables the calculation of the prevalence rate of each disease on the leaf and the determination of the overall extent of all diseases present on the leaf. The third system is a machine learning-based approach designed for detecting and segmenting diseases from tree branches, specifically targeting Nectria cinnabarina disease on apple tree branches. This system utilizes image processing techniques to enhance image quality and precision, thus facilitating the task for the Gaussian Mixture Model classifier. This classifier learns the probability distribution of the disease color and generates a segmentation mask, which is then utilized to identify the diseased areas on the branch. The results obtained from the experiments on the three systems demonstrate the effectiveness of the proposed methods in enhancing the accuracy of plant disease detection.
Moreover, they outperformed existing methods by successfully identifying diseases across various crop types, detecting multiple diseases simultaneously from the same leaf, and accurately identifying and segmenting diseases from branches.