Figure 4 - uploaded by Sarinder Kaur Dhillon
Proposed attention-VGG16 network. The extracted features from convolutional layer 13 were input to the attention block through a residual connection. The other input to the attention block was the feature maps from the 18th convolutional layer.
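The internals of the attention block are not detailed in the caption above. As one hedged illustration only, an additive attention gate (in the style of attention-gated networks) could combine the conv-13 residual features with the deeper conv-18 features; the projections `w_g`, `w_x`, the scoring vector `psi`, and the sigmoid gating below are all assumptions for this sketch, not the authors' exact design.

```python
import numpy as np

def attention_gate(f13, f18, w_g, w_x, psi):
    """Additive attention gate sketch (not the paper's exact block).

    f13 : residual features from convolutional layer 13, shape (H, W, C)
    f18 : deeper features from convolutional layer 18, shape (H, W, C)
    w_g, w_x : (C, C) projection matrices; psi : (C, 1) scoring vector.
    Returns f13 reweighted by a per-pixel attention map in (0, 1).
    """
    g = f18 @ w_g             # project the deep (gating) features
    x = f13 @ w_x             # project the residual features
    q = np.maximum(g + x, 0)  # additive combination followed by ReLU
    alpha = 1.0 / (1.0 + np.exp(-(q @ psi)))  # sigmoid -> (H, W, 1)
    return f13 * alpha        # attended residual features

rng = np.random.default_rng(0)
H, W, C = 8, 8, 4
out = attention_gate(rng.standard_normal((H, W, C)),
                     rng.standard_normal((H, W, C)),
                     rng.standard_normal((C, C)),
                     rng.standard_normal((C, C)),
                     rng.standard_normal((C, 1)))
print(out.shape)  # (8, 8, 4)
```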
Source publication
The reliable classification of benign and malignant lesions in breast ultrasound images can provide an effective and relatively low-cost method for the early diagnosis of breast cancer. The accuracy of the diagnosis is, however, highly dependent on the quality of the ultrasound systems and the experience of the users (radiologists). The use of deep...
Contexts in source publication
Context 1
... this study, we used a trained VGG16 network to extract relevant features from the ultrasound datasets. Figure 4 presents the schematic of the proposed network architecture. The input image size for the VGG16 network was 128 × 128 × 3. ...
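Given the 128 × 128 × 3 input stated above, the size of VGG16's final convolutional feature map follows from the standard architecture: the 3 × 3 convolutions use padding 1 and stride 1 and so preserve spatial size, and only the five 2 × 2 max-pool layers halve it. A minimal helper to verify the arithmetic:

```python
def vgg16_feature_shape(h, w):
    """Spatial size of VGG16's final convolutional feature map.

    Only the five max-pool layers (stride 2) change spatial dimensions;
    the 3x3 convolutions leave them unchanged.
    """
    for _ in range(5):
        h, w = h // 2, w // 2
    return h, w, 512  # 512 channels after conv block 5

print(vgg16_feature_shape(128, 128))  # (4, 4, 512)
```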
Similar publications
One of the most prevalent diseases affecting women in recent years is breast cancer. Early detection of breast cancer can aid treatment, lower the infection risk, and improve outcomes. This paper presents a hybrid approach for augmenting and segmenting breast cancer images. The framework contains two main stages: augmentation and segmentation of...
Citations
... Representatively, Daoud et al. [6] performed classification by training on the B-mode US dataset with a Visual Geometry Group (VGG)-19 network [11] pre-trained on ImageNet. To increase the accuracy of B-mode classification, Kalafi et al. [9] trained the VGG network with two losses: cross-entropy (CE) and log hyperbolic cosine (Log-Cosh). Jabeen et al. [8] performed feature extraction using a pre-trained DarkNet [12] and then selected the best features through two different optimizations: differential evolution and binary grey wolf. ...
... A complex ensemble loss function combining binary cross-entropy loss and the logarithm of the hyperbolic cosine loss was also introduced. This combination aimed to improve label discrimination and lesion classification, resulting in a more detailed classification process [32]. ...
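The binary cross-entropy plus log-cosh combination described above can be sketched in a few lines. The equal 0.5/0.5 weighting `w` below is an assumption for illustration; the cited work combines the two terms, but its exact weighting is not stated in this excerpt.

```python
import numpy as np

def bce_logcosh_loss(y_true, y_pred, w=0.5, eps=1e-7):
    """Sketch of an ensemble loss: binary cross-entropy plus log-cosh.

    `w` balances the two terms; its 0.5 default is an assumption,
    not the cited work's exact formulation.
    """
    p = np.clip(y_pred, eps, 1 - eps)  # avoid log(0)
    bce = -np.mean(y_true * np.log(p) + (1 - y_true) * np.log(1 - p))
    logcosh = np.mean(np.log(np.cosh(y_pred - y_true)))
    return w * bce + (1 - w) * logcosh

y = np.array([1.0, 0.0, 1.0])
p = np.array([0.9, 0.1, 0.8])
loss = bce_logcosh_loss(y, p)
print(float(loss))  # small positive value; 0 only for a perfect prediction
```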
... Employing a multi-layer perceptron while learning features through subtraction hinders channel independence and is detrimental to learning, since it requires more parameters. The spatial pyramid pooling employed in visual detection networks [32] can also be used to condense map features from differently sized maps, sharpening tasks while keeping precise spatial information. To achieve this goal, we design the attention mechanism and propose the McSCAM framework, which incorporates multi-scale ideas. ...
Cancer-related diseases are among the major health hazards affecting individuals globally, especially breast cancer. Cases of breast cancer among women persist, and the early indicators of the disease go unnoticed in many cases. Breast cancer can therefore be treated effectively if detection is conducted correctly and the cancer is classified at a preliminary stage. Yet direct mammogram and ultrasound image diagnosis is an intricate, time-consuming process that is best accomplished with the help of a professional. Despite various AI-based strategies in the literature, similarity between cancerous and non-cancerous regions, irrelevant feature extraction, and poorly trained models are persistent problems. This paper presents a new Multi-Feature Attention Network (MFAN) for breast cancer classification that works well for small lesions and similar contexts. MFAN has two important modules: McSCAM and GLAM for feature fusion. During channel fusion, McSCAM preserves spatial characteristics and extracts high-order statistical information, while GLAM helps reduce the scale differences among the fused features. The global and local attention branches also help the network effectively identify small lesion regions by obtaining global and local information. Based on experimental results on two public datasets, the proposed MFAN is a powerful classification model that can classify breast cancer subtypes while addressing current problems in breast cancer diagnosis.
... CNNs comprise convolution layers responsible for feature extraction, pooling layers to minimize the feature map, and fully connected layers for final classification. 9 Despite previous endeavors aimed at the development of reliable CAD systems through the utilization of diverse CNN architectures such as ResNet, 5,7,10,11 VGG, 10,12 DenseNet, 13 DarkNet, 6 MobileNet, 14 as delineated in Table 1, the pursuit of perfection persists. Specifically, there remains a need for further enhancements in accuracy and computational efficiency to facilitate the creation of an even more precise and reliable CAD system for the early detection of BrC, while minimizing the necessity for human intervention. ...
... The obtained accuracy on the BUSI dataset using the presented approach is 91.5%. Kalafi et al. 12 presented a new framework for efficient breast mass detection by modifying the VGG16 network through the addition of an attention module. The attention module increases the feature distinction between targeted and background masses in BU images. ...
Breast cancer (BrC) is one of the most common causes of death among women worldwide. Images of the breast (mammography or ultrasound) may show an anomaly that represents early indicators of BrC. However, accurate breast image interpretation necessitates labor-intensive procedures and highly skilled medical professionals. As a second opinion for the physician, deep learning (DL) tools can be useful for the diagnosis and classification of malignant and benign lesions. However, due to the lack of interpretability of DL algorithms, it is not easy for experts to understand how a label is predicted. In this work, we propose multitask U-Net saliency estimation and DL model-based breast lesion segmentation and classification using ultrasound images. A new contrast enhancement technique is proposed to improve the quality of the original images. After that, a new technique, called the UNet saliency map, is proposed for the segmentation of breast lesions. Simultaneously, a MobileNetV2 deep model is fine-tuned with additional residual blocks and trained from scratch using original and enhanced images. The purpose of the additional blocks is to reduce the number of parameters and enable better learning of ultrasound images. Training is performed from scratch, and features are extracted from the deeper layers of both models. In a later step, a new cross-entropy controlled sine-cosine algorithm is developed to select the best features. The main purpose of this step is the reduction of irrelevant features for the classification phase. The selected features are then fused by employing a serial-based Manhattan Distance (SbMD) approach, and the resultant vector is classified using machine learning classifiers. The results indicate that a wide neural network (W-NN) obtained the highest accuracy of 98.9% and a sensitivity rate of 98.70% on the selected breast ultrasound image dataset.
The accuracy of the proposed method is compared with state-of-the-art (SOTA) techniques, showing improved performance.
... They concluded that the explainable method can overcome the risks by enhancing the transparency of the black-box AI decision-making process. Kalafi et al. [28] created a novel classification scheme for breast cancer lesions using a modified VGG16 architecture and an attention module. The chosen attention method improved the ultrasonic feature distinction between background and targeted lesions. ...
Breast cancer is one of the major causes of death in women. Early diagnosis is important for screening and for controlling the mortality rate; thus, a computer-aided diagnosis system for detecting breast cancer at an early stage is highly desirable. Ultrasound is an important examination technique for breast cancer diagnosis due to its low cost. Recently, many learning-based techniques have been introduced to classify breast cancer using breast ultrasound imaging (BUSI) datasets; however, manual handling is a difficult and time-consuming process. The authors propose an EfficientNet-integrated ResNet deep network and an XAI-based framework for accurately classifying breast cancer (malignant and benign). In the initial step, data augmentation is performed to increase the number of training samples. For this purpose, three flip-based mathematical operations are introduced: horizontal, vertical, and 90°. Later, two pre-trained deep learning models are employed, with some layers skipped, and fine-tuned. Both fine-tuned models are then trained using a deep transfer learning process, and features are extracted from the deeper layers. Explainable artificial intelligence is used to analyse the performance of the trained models. After that, a new feature selection technique, called cuckoo search controlled standard error mean, is proposed based on the cuckoo search algorithm. This technique selects the best features and fuses them using a new parallel zero-padding maximum correlated coefficient method. In the end, the selection algorithm is applied again to the fused feature vector, which is classified using machine learning algorithms. The experimental process of the proposed framework is conducted on the publicly available BUSI dataset, obtaining 98.4% and 98% accuracy in two different experiments. The proposed framework is also compared with recent techniques and shows improved accuracy.
In addition, the proposed framework executed in less time than the original deep learning models.
... With MGTO optimization, the CNN model achieved a remarkable accuracy of 93.13% in just 10 iterations. Kalafi et al. 21 employed CNN models to classify benign and malignant breast lesions from two ultrasound imaging datasets. The proposed model with VGG16 features and a fully connected network of only 10 neurons achieved the best performance in the classification task, with 92% and 93% accuracy, respectively. ...
Accurate and unbiased classification of breast lesions is pivotal for early diagnosis and treatment, and a deep learning approach can effectively represent and utilize the digital content of images for more precise medical image analysis. Breast ultrasound imaging is useful for detecting and distinguishing benign masses from malignant masses. Based on the different ways in which benign and malignant tumors affect neighboring tissues, i.e., the pattern of growth and border irregularities, the penetration degree of the adjacent tissue, and tissue-level changes, we investigated the relationship between breast cancer imaging features and the roles of inter- and extra-lesional tissues and their impact on refining the performance of deep learning classification. The novelty of the proposed approach lies in considering the features extracted from the tissue inside the tumor (by performing an erosion operation) and from the lesion and surrounding tissue (by performing a dilation operation) for classification. This study uses these new features and three pre-trained deep neural networks to address the challenge of breast lesion classification in ultrasound images. To improve the classification accuracy and interpretability of the model, the proposed model leverages transfer learning to accelerate the training process. Three modern pre-trained CNN architectures (MobileNetV2, VGG16, and EfficientNetB7) are used for transfer learning and fine-tuning for optimization. There are concerns related to neural networks producing erroneous outputs in the presence of noisy images, variations in input data, or adversarial attacks; thus, the proposed system uses the BUS-BRA database (two classes/benign and malignant) for training and testing and the unseen BUSI database (two classes/benign and malignant) for testing. Extensive experiments have recorded accuracy and AUC as performance parameters.
The results indicate that the proposed system outperforms the existing breast cancer detection algorithms reported in the literature. AUC values of 1.00 are calculated for VGG16 and EfficientNet-B7 in the dilation cases. The proposed approach will facilitate this challenging and time-consuming classification task.
... However, traditional cropping methods, such as centre cropping and random cropping, are not suitable because they can further reduce image size. Additionally, the discriminative features necessary for detecting cancer cells are often concentrated in the centre areas of images in some datasets, and traditional cropping methods may result in the loss and incompleteness of these critical areas [9,10,11,12]. A 96% recognition rate was achieved for breast cancer classification using Deep Convolutional Neural Networks (DCNN), while Least Square Support Vector Machines (LSSVM) achieved around 98.53%. ...
... In [11], the BYOL algorithm enhanced mammogram image processing for diagnostic purposes, but challenges included inefficiencies in manual labelling and poor performance of pre-trained CNNs, as well as data bias in contrastive methods sensitive to image augmentations. In [12], the choice of loss function was highlighted as crucial, with an attention block enhancing model performance. The proposed model achieved 93% accuracy, outperforming other VGG16 architectures. ...
Early detection of breast cancer can significantly reduce its high mortality rate. However, manual diagnosis from mammography images always requires a skilled professional. Various researchers have developed artificial intelligence-based methods to address this challenge, yet they encounter several issues such as overlapping cancerous and noncancerous regions, extraction of irrelevant features, and inadequate training models. In this paper, we propose a novel computationally automated biological mechanism for categorizing breast cancer. This approach leverages a new optimization technique based on the Moth Flame Optimization (MFO) algorithm, which enhances the classification of breast cancer cases. The proposed framework consists of two main stages: (1) utilizing deep neural networks (DNNs) based on transfer learning, and (2) implementing a deep convolutional neural network (DCNN) optimized with MFO. By integrating transfer learning with an optimized DCNN for classification, our method significantly improves accuracy compared to recent approaches. The proposed framework was evaluated using publicly available datasets, achieving an average classification accuracy of 97.95%. To ensure the statistical significance and robustness of our results, we conducted tests that demonstrated the effectiveness and superior performance of our methodology compared to current methods.
... These CNN-based methods extract convolutional features through transfer learning, leveraging pre-trained architectures such as VGG16, YOLOv3, or GoogLeNet. Notable studies [Ald19, Chi19, Han17, Kal21] have demonstrated the efficacy of this approach in accurately classifying breast tumors based solely on learned features. However, deep learning methods come with certain trade-offs. ...
Breast cancer, a prevalent disease among women, demands early detection for better clinical outcomes. While mammography is widely used for breast cancer screening, its limitations in, e.g., dense breast tissue necessitate additional diagnostic tools. Ultrasound breast imaging provides valuable tumor information (features) which is used for standardized reporting, aiding in the screening process and precise biopsy targeting. Previous studies have demonstrated that the classification of regions of interest (ROIs), including only the lesion, outperforms whole-image classification. Therefore, our objective is to identify essential lesion features within such ROIs, which are sufficient for accurate tumor classification, enhancing the robustness of diagnostic image acquisition. For our experiments, we employ convolutional neural networks (CNNs) to first segment suspicious lesions' ROIs. In a second step, we generate different ROI subregions: top/bottom half, horizontal subslices, and ROIs with cropped-out center areas. Subsequently these ROI subregions are classified into benign vs. malignant lesions with a second CNN. Our results indicate that outermost ROI subslices perform better than inner ones, likely due to increased contour visibility. Removing the inner 66% of the ROI did not significantly impact classification outcomes (p = 0.35). Classifying half ROIs did not negatively impact accuracy compared to whole ROIs, with the bottom ROI performing slightly better than the top ROI, despite significantly lower image contrast in that region. Therefore, even visually less favorable images can be reliably analyzed when the lesion's contour is depicted. In conclusion, our study underscores the importance of understanding tumor features in ultrasound imaging, supporting enhanced diagnostic approaches to improve breast cancer detection and management.
... These techniques have demonstrated notable achievements. For example, several convolutional neural network (CNN) architectures, such as VGG, have been utilized to identify breast cancers (Kalafi et al., 2021; Luo et al., 2022). CNNs are preferred in this context due to their ability to acquire strong feature representations from breast ultrasound (BUS) images. ...
... A plethora of deep learning-based methodologies have been devised for categorizing breast tumors as benign or malignant. In their study, Kalafi et al. (2021) modified the VGG16 network by incorporating an attention mechanism. This modification aimed to enhance the network's ability to extract pertinent characteristics and emphasize crucial pixel information pertaining to the target tumor in ultrasound images, distinguishing it from the background. ...
Breast cancer is a prominent contributor to cancer-associated mortality in the female population on a global scale. The timely identification and precise categorization of breast cancer are of utmost importance in enhancing patient prognosis. Nevertheless, the task of precisely categorizing breast cancer based on ultrasound imaging continues to present difficulties, primarily due to the presence of dense breast tissues and their inherent heterogeneity. This study presents a unique approach for breast cancer categorization utilizing a wavelet-based vision transformer network. To enhance the neural network's receptive fields, we have incorporated the discrete wavelet transform (DWT) into the network input. This technique enables the capture of significant features in the frequency domain. The proposed model exhibits the capability to effectively capture intricate characteristics of breast tissue, hence enabling correct classification of breast cancer with a notable degree of precision and efficiency. We utilized two breast tumor ultrasound datasets, including 780 cases from Baheya hospital in Egypt and 267 patients from the UDIAT Diagnostic Centre of Sabadell in Spain. The findings of our study indicate that the proposed transformer network achieves exceptional performance in breast cancer classification. With AUC values of 0.984 and 0.968 on the two datasets, our approach surpasses conventional deep learning techniques, establishing itself as the leading method in this domain. This study signifies a noteworthy advancement in the diagnosis and categorization of breast cancer, showcasing the potential of the proposed transformer networks to enhance the efficacy of medical imaging analysis.
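The DWT front end described above can be illustrated with a single level of a 2-D Haar transform. The paper's actual wavelet family and decomposition depth are not given in this excerpt, so this is only a minimal sketch of the idea of splitting an image into frequency sub-bands before feeding a network.

```python
import numpy as np

def haar_dwt2(img):
    """One level of a 2-D Haar discrete wavelet transform.

    Returns the (LL, LH, HL, HH) sub-bands, each half the input size.
    A minimal stand-in for a DWT front end; real systems may use other
    wavelet families and multiple decomposition levels.
    """
    a = (img[0::2, :] + img[1::2, :]) / 2.0   # row averages
    d = (img[0::2, :] - img[1::2, :]) / 2.0   # row details
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0      # low-low (approximation)
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0      # horizontal detail
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0      # vertical detail
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0      # diagonal detail
    return ll, lh, hl, hh

img = np.arange(16.0).reshape(4, 4)
ll, lh, hl, hh = haar_dwt2(img)
print(ll.shape)  # (2, 2)
```

A constant image has all its energy in the LL band, which is a quick sanity check on the detail bands.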
... Various artifact removal, noise reduction, and enhancement procedures, along with the removal of undesirable regions, can improve image quality. A customized VGG16 achieved 98.02% accuracy in breast cancer classification (Montaha et al. 10). An attention mechanism in an improved VGG16 architecture, combining binary cross-entropy with the logarithm of the hyperbolic cosine loss, achieved 93.00% accuracy (Kalafi et al. 11). Multilevel transfer learning, feature extraction-based transfer learning, and GLCM-based feature extraction are some of the frameworks that can improve the performance of breast cancer classification (Table 1). Hybrid architectures, such as combinations of VGGNet, ResNet, and DenseNet models, were used to develop a robust ensemble architecture for breast cancer classification. ...
... Here, V_i ∈ V denotes a node and e_ij = (V_i, V_j) ∈ E denotes an edge pointing from V_i to V_j. The neighborhood of a node V is defined as in 11. For graph generation, the seven handcrafted features are utilized. Each feature is represented as a node, and the set of nodes is denoted {V} in the graph. ...
... Notably, our approach outperforms prior works on the same dataset (BUSI), emphasizing the effectiveness of the GCN model in enhancing the accuracy of breast cancer classification from ultrasound images. Zhang et al., 9 Montaha et al., 10 and Kalafi et al. 11 all present innovative approaches to the classification of breast cancer. Zhang et al. 9 utilized a variety of architectures, with InceptionV3 being the most successful, obtaining an accuracy of 91.0% while concentrating on transfer learning techniques. ...
Objective
Early diagnosis of breast cancer can lead to effective treatment, possibly increase long-term survival rates, and improve quality of life. The objective of this study is to present an automated analysis and classification system for breast cancer using clinical markers such as tumor shape, orientation, margin, and surrounding tissue. The novelty and uniqueness of the study lie in the approach of considering medical features based on the diagnosis of radiologists.
Methods
Using clinical markers, a graph is generated where each feature is represented by a node, and the connection between them is represented by an edge which is derived through Pearson's correlation method. A graph convolutional network (GCN) model is proposed to classify breast tumors into benign and malignant, using the graph data. Several statistical tests are performed to assess the importance of the proposed features. The performance of the proposed GCN model is improved by experimenting with different layer configurations and hyper-parameter settings.
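The graph construction and GCN classification described above can be sketched as follows. The |r| > 0.5 edge threshold and the single normalized propagation step ReLU(D^-1/2 (A+I) D^-1/2 H W) are illustrative assumptions (the standard Kipf-Welling form), not this study's exact configuration.

```python
import numpy as np

def correlation_graph(X, thresh=0.5):
    """Adjacency over feature-nodes from Pearson correlation.

    X : (n_samples, n_features) matrix; each feature becomes a node.
    An edge joins features whose |Pearson r| exceeds `thresh`; this
    threshold is an assumed illustration, not taken from the paper.
    """
    r = np.corrcoef(X, rowvar=False)      # (n_features, n_features)
    A = (np.abs(r) > thresh).astype(float)
    np.fill_diagonal(A, 0.0)              # no self-loops in A itself
    return A

def gcn_layer(A, H, W):
    """One GCN propagation step: ReLU(D^-1/2 (A+I) D^-1/2 H W)."""
    A_hat = A + np.eye(A.shape[0])        # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W, 0)

rng = np.random.default_rng(1)
X = rng.standard_normal((50, 7))          # 50 samples, 7 clinical features
A = correlation_graph(X)
H0 = rng.standard_normal((7, 3))          # initial node embeddings
H1 = gcn_layer(A, H0, rng.standard_normal((3, 3)))
print(A.shape, H1.shape)  # (7, 7) (7, 3)
```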
Results
Results show that the proposed model has a 98.73% test accuracy. The performance of the model is compared with a graph attention network, a one-dimensional convolutional neural network, five transfer learning models, ten machine learning models, and three ensemble learning models. The performance of the model was further assessed on three supplementary breast cancer ultrasound image datasets, where the accuracies are 91.03%, 94.37%, and 89.62% for Dataset A, Dataset B, and Dataset C (combining Datasets A and B), respectively. Overfitting is assessed through k-fold cross-validation.
Conclusion
Several variants are utilized to present a more rigorous and fair evaluation of our work, especially to demonstrate the importance of extracting clinically relevant features. Moreover, a GCN model using graph data can be a promising solution for an automated feature-based breast image classification system.
... Recent advances in deep learning-based approaches, including convolutional neural networks (CNNs) and vision Transformers, have achieved remarkable success in the field of medical imaging [4,5]. For instance, [6,7] employed various CNN architectures, such as VGG, to classify breast tumors, as CNNs can learn robust feature representations of BUS images. Although some CNN-based solutions achieved a classification accuracy close to 90%, they have limitations: CNNs handle long-range dependencies only by expanding the convolution kernel size, which slows down the system even as it enhances the feature representation. ...
... Numerous deep learning-based solutions have been developed to classify breast masses into benign and malignant categories. Kalafi et al. [6] modified the VGG16 network with an attention mechanism to extract relevant features and highlight important pixel information between the background and the target tumor with ultrasound. The authors combined two loss functions, including binary cross-entropy and the logarithm of the hyperbolic cosine loss. ...
Automated classification of tumors in breast ultrasound (BUS) using deep learning-based methods has achieved remarkable success recently. The initial performance of individual convolutional neural networks (CNNs) and vision Transformers has been encouraging. However, artifacts like shadows, low contrast, speckle noise, and variations in tumor shape and size impose a kind of uncertainty that limits the performance of existing methods. The interpretation of classification results with a high confidence rate is strongly required in a real clinical setting. This work proposes an efficient method for predicting breast tumor malignancy (EPTM) in BUS images. EPTM comprises heterogeneous deep learning-based feature extraction models and a Choquet integral-based fusion mechanism. Specifically, EPTM incorporates efficient CNN and vision Transformer methods to build accurate individual breast tumor malignancy prediction models (IMMs). The heterogeneous IMMs ensure complete feature representation of the breast tumors, as each CNN or vision Transformer captures different textural patterns in BUS images. EPTM applies a learnable fuzzy measure-based Choquet integral to fuse the predictions of the IMMs, where the grey wolf optimizer is employed for generating fuzzy measures. The experimental results confirm the versatility of the EPTM method evaluated on two publicly available datasets, namely UDIAT BUS and Baheya Hospital. EPTM yielded state-of-the-art results with areas under the curve of 0.932 and 0.98, respectively, on the UDIAT BUS and Baheya Hospital datasets. The source code of the proposed model is publicly available at https://github.com/vivek231/breastUS-classification.
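The Choquet-integral fusion described above can be sketched for two hypothetical base models (`"cnn"` and `"vit"` below are placeholder names). The hand-set fuzzy measure `mu` stands in for the measure that the cited work learns with the grey wolf optimizer.

```python
def choquet_integral(scores, mu):
    """Discrete Choquet integral of classifier scores w.r.t. fuzzy measure mu.

    scores : dict classifier-name -> score in [0, 1]
    mu     : dict frozenset-of-names -> measure in [0, 1], with
             mu(empty set) = 0 and mu(all names) = 1. The toy measure
             used below is hand-set; the cited work learns it instead.
    """
    items = sorted(scores.items(), key=lambda kv: kv[1])  # ascending scores
    total, prev = 0.0, 0.0
    remaining = set(scores)
    for name, x in items:
        # weight each score increment by the measure of the coalition
        # of classifiers whose score is at least x
        total += (x - prev) * mu[frozenset(remaining)]
        prev = x
        remaining.discard(name)
    return total

mu = {frozenset(): 0.0,
      frozenset({"cnn"}): 0.5,
      frozenset({"vit"}): 0.6,
      frozenset({"cnn", "vit"}): 1.0}
fused = choquet_integral({"cnn": 0.8, "vit": 0.6}, mu)
print(fused)  # 0.6*1.0 + (0.8-0.6)*0.5 = 0.7
```

When all classifiers agree, the integral returns their common score, which is a useful sanity check on any candidate fuzzy measure.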