Fig. 8. Illustration of the network architecture of the VGG-19 model: conv means convolution, FC means fully connected.

Context in source publication

Context 1
... and Zisserman of the University of Oxford created a 19-layer (16 convolutional, 3 fully connected) CNN, called the VGG-19 model, that strictly uses 3×3 filters with stride and padding of 1, along with 2×2 max-pooling layers with stride 2. 28,29 Compared to AlexNet, the VGG-19 (see Fig. 8) is a deeper CNN with more layers. To reduce the number of parameters in such a deep network, it uses small 3×3 filters in all convolutional layers, and it achieved a 7.3% top-5 error rate. The VGG-19 model was not the winner of ILSVRC 2014, 30 but the VGG Net is one of the most influential papers because it reinforced the notion ...
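The layer pattern described in this excerpt (five blocks of 3×3 convolutions, each followed by 2×2 max pooling, then three fully connected layers) can be sketched in a few lines of Keras. This is an illustrative reconstruction from the figure and text, not the original authors' code; the layer widths follow the published VGG-19 configuration.

# Minimal sketch of the VGG-19 layer pattern (16 conv + 3 FC weight layers),
# written with tensorflow.keras; illustrative only.
from tensorflow.keras import layers, models

def vgg19_sketch(input_shape=(224, 224, 3), num_classes=1000):
    model = models.Sequential()
    model.add(layers.Input(shape=input_shape))
    # Five blocks of 3x3 convolutions (stride 1, padding of 1 == 'same'),
    # each followed by 2x2 max pooling with stride 2.
    for block_filters, n_convs in [(64, 2), (128, 2), (256, 4), (512, 4), (512, 4)]:
        for _ in range(n_convs):
            model.add(layers.Conv2D(block_filters, 3, padding='same', activation='relu'))
        model.add(layers.MaxPooling2D(pool_size=2, strides=2))
    # Three fully connected layers; the last one produces the class scores.
    model.add(layers.Flatten())
    model.add(layers.Dense(4096, activation='relu'))
    model.add(layers.Dense(4096, activation='relu'))
    model.add(layers.Dense(num_classes, activation='softmax'))
    return model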

Similar publications

Article
Full-text available
The radiation dose delivered to patients undergoing mammography examination is of utmost importance because of the risk of cancer induction due to the process. In this work, we analyze the dose to 109 patients (214 images) who underwent mammographic examinations with a full-field digital mammography (FFDM) system. Quality control assessment was fir...
Preprint
Full-text available
Deep learning models designed for visual classification tasks on natural images have become prevalent in medical image analysis. However, medical images differ from typical natural images in many ways, such as significantly higher resolutions and smaller regions of interest. Moreover, both the global structure and local details play important roles...
Article
Full-text available
There is uncertainty about the magnitude of the effect of screening mammography on breast cancer mortality. The relevance and validity of evidence from dated randomized controlled trials has been questioned, whereas observational studies often lack a valid comparison group. There is no estimate of the effect of one screening invitation only. We exp...
Article
Full-text available
To investigate the value of contrast-enhanced mammography (CEM) compared to full-field digital mammography (FFDM) in screening breast cancer patients after breast-conserving surgery (BCS), this Health Insurance Portability and Accountability Act-compliant, institutional review board-approved retrospective, single-institution study included 971 CEM...

Citations

... The network architecture of the VGG-19 model [17]. ...
Article
Full-text available
In the aftermath of catastrophic natural disasters such as earthquakes, tsunamis, and explosions, providing immediate help to key areas can mean the difference between life and death for many individuals. To meet this critical demand, we created a hybrid transportation system that harnesses the power of VGG19 and traditional shortest path algorithms. The objective was to create a real-time system that could address these issues and provide a novel viewpoint. By merging VGG19 with the shortest path algorithm, the suggested approach can precisely anticipate damaged roads and steer clear of them when determining the shortest way between sites during emergencies or natural disasters. With the potential to save many lives, this creative strategy can help emergency responders reach vital places swiftly and effectively while also saving important time and resources. The experimental study shows that the proposed model can achieve robust results. In fact, our solution achieves a 98% accuracy rate and a 0.972 G-Mean score on the test set.
... This model has a total of 138 million parameters. VGG19 is pre-trained on a subset of the ImageNet dataset [13] used in the ImageNet Large-Scale Visual Recognition Challenge, which contains over a million images, enabling it to classify images across thousands of classes [13]. This extensive training has equipped VGG19 with the ability to capture detailed and varied feature representations from a wide range of images, thereby making it highly effective in feature extraction tasks. ...
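A minimal sketch of the feature-extraction use described in this excerpt, loading the ImageNet-pretrained VGG19 weights shipped with Keras and turning an image into a fixed-length feature vector. This is illustrative, not the citing authors' code, and the image path is a placeholder.

# Illustrative sketch: ImageNet-pretrained VGG19 as a fixed feature extractor.
import numpy as np
from tensorflow.keras.applications.vgg19 import VGG19, preprocess_input
from tensorflow.keras.preprocessing import image

extractor = VGG19(weights='imagenet', include_top=False, pooling='avg')  # 512-d output

def extract_features(img_path):
    img = image.load_img(img_path, target_size=(224, 224))
    x = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))
    return extractor.predict(x)[0]   # one 512-dimensional feature vector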
Chapter
Full-text available
In the fashion retail e-commerce sector, personalized product recommendations are crucial for enhancing the shopping experience. This study introduces a method that combines a pre-trained deep learning model named VGG19 with the 10 nearest neighbors algorithm to recommend visually similar products. VGG19 is utilized to extract detailed features from product images, enabling more accurate recommendations. The nearest neighbors algorithm then selects the ten products most similar to those previously viewed by customers. Recommendations are ranked based on customer purchase frequency to prioritize the most popular and relevant items. This method's practical applicability was demonstrated by testing it on a diverse set of products, including jackets from outerwear, baby bodysuits from children's wear, socks from footwear, and sunglasses from the accessories category.
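The retrieval step this abstract describes (find the ten catalogue items whose VGG19 features are closest to the viewed product) can be sketched with scikit-learn's NearestNeighbors. The feature file, the cosine metric, and the catalogue layout are assumptions for illustration, not details taken from the chapter.

# Hedged sketch: fit a 10-nearest-neighbours index over pre-computed VGG19 features.
import numpy as np
from sklearn.neighbors import NearestNeighbors

catalogue_features = np.load('catalogue_vgg19_features.npy')   # placeholder, shape (n_products, 512)
index = NearestNeighbors(n_neighbors=10, metric='cosine').fit(catalogue_features)

def recommend(query_feature):
    # Returns indices of the 10 catalogue products closest to the viewed item.
    _, neighbours = index.kneighbors(query_feature.reshape(1, -1))
    return neighbours[0]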
... The network architecture of the VGG-19 model [17]. ...
Preprint
Full-text available
In the aftermath of catastrophic natural disasters such as earthquakes, tsunamis, and explosions, providing immediate help to key areas can mean the difference between life and death for many individuals. To meet this critical demand, we created a hybrid transportation system that harnesses the power of VGG19 and traditional shortest path algorithms. The objective was to create a real-time system that could address these issues and provide a novel viewpoint. By merging VGG19 with the shortest path algorithm, the suggested approach can precisely anticipate damaged roads and steer clear of them when determining the shortest way between sites during emergencies or natural disasters. With the potential to save many lives, this creative strategy can help emergency responders reach vital places swiftly and effectively while also saving important time and resources. The experimental study shows that the proposed model can achieve robust results. In fact, our solution achieves a 98% accuracy rate and a 0.972 G-Mean score on the test set.
... The VGG-19 network contains more parameters (138 million) than the VGG-16 network, consistent with its larger number of layers [21]. ...
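The parameter counts quoted in these excerpts can be checked directly against the stock Keras application models; this is an illustrative check, not part of the cited work.

# Illustrative check of the VGG-16 / VGG-19 parameter counts discussed above,
# using the stock Keras application models (weights are not needed to count parameters).
from tensorflow.keras.applications import VGG16, VGG19

print('VGG-16 parameters:', VGG16(weights=None).count_params())
print('VGG-19 parameters:', VGG19(weights=None).count_params())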
Article
Full-text available
Using lung images obtained by computed tomography (CT), this study aims to detect coronavirus (Covid-19) disease with deep learning (DL) techniques. The study included 751 lung CT images from 118 Covid-19 patients and 628 lung CT images from 100 healthy individuals. In total, 70% of the 1379 images were used for training and 30% for testing. In the study, two different methods were proposed on the same dataset. In the first method, the images were trained on AlexNet, VGG-16, VGG-19, GoogLeNet and a proposed network. The performance metrics obtained from the five networks were compared, and the proposed network achieved the highest accuracy, 95.61%. In the second method, the images were trained on VGG-16, VGG-19, DenseNet-121, ResNet-50 and MobileNet networks. From the image features obtained from each of these networks, the best 1000 features were selected by Principal Component Analysis (PCA) and classified with Random Forest (RF) and Support Vector Machines (SVM). According to the classification results, the highest accuracy rate, 93.94%, was obtained with SVM on the best 1000 features selected from the features extracted by the VGG-16 and MobileNet networks. It is thought that this study can be a helpful tool in the diagnosis of Covid-19 disease while reducing time and labor costs through the use of artificial intelligence (AI).
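A hedged sketch of the second pipeline described in this abstract (deep features reduced by PCA, then classified with an SVM). The arrays are synthetic placeholders standing in for the pre-extracted VGG-16/MobileNet features, and the kernel choice and split ratio are assumptions.

# Hedged sketch: deep features -> PCA -> SVM, with placeholder data.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split

# Placeholder stand-ins for pre-extracted deep features (e.g. 4096-d VGG vectors) and labels.
X = np.random.rand(1379, 4096)
y = np.random.randint(0, 2, size=1379)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# PCA keeps at most min(n_samples, n_features) components, so cap it accordingly.
n_components = min(1000, X_train.shape[0], X_train.shape[1])
clf = make_pipeline(PCA(n_components=n_components), SVC(kernel='rbf'))
clf.fit(X_train, y_train)
print('test accuracy:', clf.score(X_test, y_test))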
... Deep Convolutional Neural Networks (DCNNs), a deep learning technique, have the ability to automatically learn features through several network layers from large labelled datasets. In medical image analysis, DCNNs have been successfully utilized for various tasks such as skin lesion classification [2], suspicious region detection [3], classification of diabetic retinopathy on fundoscopic images [4], and automatic classification of breast cancer histopathological images [5]. ...
Article
Full-text available
The recent release of large amounts of chest radiographs (CXR) has prompted research into automated analysis of chest X-rays to improve health care services. DCNNs are well suited for image classification because they can learn to extract features from images that are relevant to the task at hand. However, class imbalance is a common problem in chest X-ray imaging, where the number of samples for some disease categories is much lower than the number of samples in other categories. This can occur as a result of the rarity of some diseases being studied or the fact that only a subset of patients with a particular disease may undergo imaging. Class imbalance can make it difficult for Deep Convolutional Neural Networks (DCNNs) to learn and make accurate predictions on the minority classes. Obtaining more data for minority groups is not feasible in medical research. Therefore, there is a need for a suitable method that can address class imbalance. To address class imbalance in DCNNs, this study proposes Deep Convolutional Neural Networks with Augmentation. The results show that data augmentation can be applied to an imbalanced dataset to increase the representation of the minority class by generating new images that are slight variations of the original CXR images. This study further evaluates the identifiability and consistency of the proposed model.
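A minimal sketch of the augmentation idea described here: generate slight variations of the minority-class chest X-rays and mix them back into the training set. The transform ranges and file names are hypothetical, not the study's settings.

# Hedged sketch: augment minority-class CXR images with slight variations (Keras).
import numpy as np
from tensorflow.keras.preprocessing.image import ImageDataGenerator

augmenter = ImageDataGenerator(rotation_range=10,
                               width_shift_range=0.05,
                               height_shift_range=0.05,
                               zoom_range=0.1,
                               horizontal_flip=True)

minority_images = np.load('minority_class_cxr.npy')     # placeholder, shape (n, 224, 224, 1)
minority_labels = np.load('minority_class_labels.npy')  # placeholder labels

# Each pass over this generator yields newly augmented copies of the minority class,
# which can be mixed into the training set to reduce the class imbalance.
augmented_flow = augmenter.flow(minority_images, minority_labels, batch_size=32)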
... The output is generated by a fully connected layer with 1000 units followed by the softmax activation function [13]. In Fig. 4 [14], the overall architecture of VGG19 is displayed. ...
Article
Full-text available
Walnuts have been extensively available around the world for many years and come in various quality varieties. The proper variety of walnut can be grown in the right area and is vital to human health. This fruit's production is time-consuming and expensive. However, even specialists find it challenging to differentiate distinct kinds since walnut leaves are so similar in color and feel. There aren't many studies on the classification of walnut leaves in the literature, and most of them were conducted in laboratories. The classification process can now be carried out automatically from leaf photos thanks to technological advancements. The walnut data set, which consists of 1751 photos of 18 different types, was used to test the suggested deep learning model. The three most successful algorithms among the commonly utilized CNN algorithms in the literature were first selected for the suggested model. From the Vgg16, Vgg19, and AlexNet CNN algorithms, many features were retrieved. Utilizing the Whale Optimization Algorithm (WOA), a new feature set was produced by choosing the top extracted features. KNN is used to categorize this feature set. An accuracy rating of 92.59% was attained as a consequence of the tests.
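The fuse-select-classify pattern in this abstract can be sketched as follows. Note that a simple univariate selector (SelectKBest) stands in for the Whale Optimization Algorithm used in the cited work, and the feature files are hypothetical placeholders.

# Hedged sketch: concatenate CNN features, select a subset, classify with KNN.
# SelectKBest is a stand-in for the WOA-based selection of the cited study.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

vgg16_feats = np.load('vgg16_features.npy')       # placeholder feature files
vgg19_feats = np.load('vgg19_features.npy')
alexnet_feats = np.load('alexnet_features.npy')
labels = np.load('labels.npy')

fused = np.concatenate([vgg16_feats, vgg19_feats, alexnet_feats], axis=1)
clf = make_pipeline(SelectKBest(f_classif, k=1000), KNeighborsClassifier(n_neighbors=5))
clf.fit(fused, labels)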
... To verify the superiority of the proposed method in terms of feature extraction and diagnostic performance, we compared it with traditional machine learning methods and state-of-the-art deep learning methods. The compared methods include a shallow CNN, VGG19 [37], GoogLeNet [38] and DenseNet [39]. To realize zero-shot compound mechanical fault diagnosis using these methods, we retrained these models respectively from scratch using the different data input methods of CB vibration signals shown in table 1, and constructed fault ALs similar to our proposed method. ...
Article
Full-text available
Diagnosis of compound mechanical faults for power circuit breakers (CBs) is a challenging task. In traditional fault diagnosis methods, however, all fault types need to be collected in advance for the training of the diagnosis model. Such processes have poor generalization capabilities for industrial scenarios with no or few data when faced with new faults. In this study, we propose a novel zero-shot learning method named DSR-AL to address this problem. An unsupervised neural network, namely a depthwise separable residual convolutional neural network (DSRCNN), is designed to directly learn features from 3D time-frequency images of CB vibration signals. Then we build fault attribute learners (ALs) for transferring fault knowledge to the target faults. Finally, the ALs are used to predict the attribute vector of the target faults, thus realizing the recognition of previously unseen faults. Orthogonal experiments are designed and conducted on real industrial switchgear to validate the effectiveness of the proposed diagnosis framework. Results show that it is feasible to diagnose target faults without using their samples for training, which greatly saves the costs of collecting fault samples. This will help to accurately identify the various faults that may occur during a CB's life cycle and facilitate the application of intelligent fault diagnosis systems.
... The VGG19 network (Figure 7) [26] has 19 weight layers, comprising 16 convolutional layers and 3 fully connected layers, along with 5 max-pooling layers and 1 softmax layer. It accepts input images of size 224×224. ...
Article
Skin cancer is a medical condition characterized by abnormal growth of skin cells. This occurs when the DNA within these skin cells becomes damaged. In addition, it is a prevalent form of cancer that can result in fatalities if not identified in its early stages. A skin biopsy is a necessary step in determining the presence of skin cancer. However, this procedure requires time and expertise. In recent times, artificial intelligence and deep learning algorithms have exhibited superior performance compared with humans in visual tasks. This result can be attributed to improved processing capabilities and the availability of vast datasets. Automated classification driven by these advancements has the potential to facilitate the early identification of skin cancer. Traditional diagnostic methods might overlook certain cases, whereas artificial intelligence-powered approaches offer a broader perspective. Transfer learning is a widely used technique in deep learning, involving the use of pre-trained models. These models are extensively implemented in healthcare, especially in diagnosing and studying skin lesions. Similarly, convolutional neural networks (CNNs) have recently established themselves as highly robust autonomous feature extractors that can achieve excellent accuracy in skin cancer detection. The primary goal of this study was to build deep-learning models designed to perform binary classification of skin cancer into benign and malignant categories. The tasks to resolve are as follows: partitioning the database, allocating 80% of the images to the training set and the remaining 20% to the test set, and applying a preprocessing procedure to the images to optimize their suitability for our analysis. This involved augmenting the dataset and resizing the images to align them with the specific requirements of each model used in our research, and finally building deep learning models to perform the classification task. The methods used are a CNN model and two transfer learning models, i.e., Visual Geometry Group 16 (VGG16) and Visual Geometry Group 19 (VGG19). They are applied to dermoscopic images from the International Skin Image Collaboration Archive (ISIC) dataset to classify skin lesions into two classes and to conduct a comparative analysis. Our results indicated that the VGG16 model outperformed the others, achieving an accuracy of 87% and a loss of 38%. Additionally, the VGG16 model demonstrated the best recall, precision, and F1-score. Comparatively, the VGG16 and VGG19 models displayed superior performance in this classification task compared with the CNN model. In conclusion, the significance of this study stems from the fact that deep learning-based clinical decision support systems have proven to be highly beneficial, offering valuable recommendations to dermatologists during their diagnostic procedures.
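A hedged sketch of the transfer-learning setup this abstract describes for binary skin-lesion classification: a frozen ImageNet-pretrained VGG16 base with a small benign/malignant head. The head size and optimizer settings are assumptions, not the study's reported configuration.

# Hedged sketch: frozen VGG16 base + binary classification head (Keras).
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16

base = VGG16(weights='imagenet', include_top=False, input_shape=(224, 224, 3))
base.trainable = False                      # keep the pretrained features fixed

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(128, activation='relu'),   # assumed head size
    layers.Dense(1, activation='sigmoid'),  # benign vs. malignant
])
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])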
... The VGG-19 model has 138 million parameters and ranked 2nd in classification and 1st in localization at ILSVRC 2014. The VGG-19 model was trained on more than 1 million images and can classify images into 1000 object categories [20], [21]. Research conducted by Showmick Guha Paul and colleagues using the VGG-19 model obtained an accuracy value of 95% [22]. ...
Article
Full-text available
Identify banana plant diseases using machine learning with the CNN method to make it easier to identify diseases in banana plants through leaf images. It employs the CNN method, incorporating ResNet50 because ResNet50 is one of the best models and a suitable model for the dataset used, and the VGG-19 model is used because VGG-19 was one of the winning models of the 2014 ImageNet Challenge and is a model that also fits the dataset used. The research objectives encompass dataset processing, model architecture development, evaluation, and result reporting, all aimed at enhancing disease identification in banana plants. The ResNet50 model achieved an impressive 94% accuracy, with 88% precision, 91% recall, and an 89% F1-score, while the VGG-19 model demonstrated strong performance with 91% accuracy, surpassing prior research and highlighting the effectiveness of these models in identifying banana plant diseases through leaf images. In conclusion, the ResNet50 model's exceptional accuracy positions it as the preferred model for CNN-based disease identification in banana plants, offering significant advancements and insights for agricultural practices. Future research opportunities include exploring alternative CNN models, architectural variations, and more extensive training datasets to enhance disease identification accuracy.