Article

The effectiveness of deep learning vs. traditional methods for lung disease diagnosis using chest X-ray images: A systematic review

... Recent literature highlights the advantages of ensemble learning, particularly stacking, in enhancing robustness and predictive accuracy [19,26,28,[36][37][38]. Our meta-model aligns with these findings, effectively integrating multiple base models to outperform individual networks. ...
Article
Full-text available
Citation: Nakrani, H.; Shahra, E.Q.; Basurra, S.; Mohammad, R.; Vakaj, E.; Jabbar, W.A. Advanced Diagnosis of Cardiac and Respiratory Diseases from Chest X-Ray Imagery Using Deep Learning Ensembles. J. Sens. Actuator Netw. 2025, 14, 44. https://doi. Abstract: Chest X-ray interpretation is essential for diagnosing cardiac and respiratory diseases. This study introduces a deep learning ensemble approach that integrates Convolutional Neural Networks (CNNs), including ResNet-152, VGG19, EfficientNet, and a Vision Transformer (ViT), to enhance diagnostic accuracy. Using the NIH Chest X-ray dataset, the methodology involved comprehensive preprocessing, data augmentation, and model optimization techniques to address challenges such as label imbalance and feature variability. Among the individual models, VGG19 exhibited strong performance with a Hamming Loss of 0.1335 and high accuracy in detecting Edema, while ViT excelled in classifying certain conditions like Hernia. Despite the strengths of individual models, the ensemble meta-model achieved the best overall performance, with a Hamming Loss of 0.1408 and consistently higher ROC-AUC values across multiple diseases, demonstrating its superior capability to handle complex classification tasks. This robust ensemble learning framework underscores its potential for reliable and precise disease detection, offering significant improvements over traditional methods. The findings highlight the value of integrating diverse model architectures to address the complexities of multi-label chest X-ray classification, providing a pathway for more accurate, scalable, and accessible diagnostic tools in clinical practice.
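As a concrete illustration of the stacking idea described above, the following is a minimal, hypothetical PyTorch sketch of a meta-model that learns to combine per-label probabilities from several pretrained backbones; the choice of backbones, the 14-label output, and the small meta-head are assumptions for illustration, not the authors' implementation.

```python
# Hypothetical stacking sketch: frozen base CNN/ViT backbones feed a small meta-model.
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 14  # e.g., a ChestX-ray14-style multi-label setup (assumption)

def make_backbone(name):
    """Return a pretrained backbone with its classifier head resized to NUM_CLASSES."""
    if name == "resnet152":
        m = models.resnet152(weights="IMAGENET1K_V1")
        m.fc = nn.Linear(m.fc.in_features, NUM_CLASSES)
    elif name == "vgg19":
        m = models.vgg19(weights="IMAGENET1K_V1")
        m.classifier[6] = nn.Linear(m.classifier[6].in_features, NUM_CLASSES)
    elif name == "vit_b_16":
        m = models.vit_b_16(weights="IMAGENET1K_V1")
        m.heads.head = nn.Linear(m.heads.head.in_features, NUM_CLASSES)
    else:
        raise ValueError(name)
    return m

class StackingMetaModel(nn.Module):
    """Concatenates each base model's sigmoid outputs and learns a meta-classifier."""
    def __init__(self, base_models, num_classes=NUM_CLASSES):
        super().__init__()
        self.base_models = nn.ModuleList(base_models)
        self.meta = nn.Sequential(
            nn.Linear(len(base_models) * num_classes, 128),
            nn.ReLU(),
            nn.Linear(128, num_classes),
        )

    def forward(self, x):
        # Base models are kept frozen while the meta-learner is trained.
        with torch.no_grad():
            probs = [torch.sigmoid(m(x)) for m in self.base_models]
        return self.meta(torch.cat(probs, dim=1))  # logits for BCEWithLogitsLoss

model = StackingMetaModel([make_backbone(n) for n in ("resnet152", "vgg19", "vit_b_16")])
logits = model(torch.randn(2, 3, 224, 224))
print(logits.shape)  # torch.Size([2, 14])
```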
... Radiologists' workload has significantly increased over the past three decades, potentially impacting the accuracy of radiologic diagnoses [10]. In response, numerous studies have explored the use of deep learning models to improve diagnostic accuracy and reduce the burden on radiologists [11]. Building on this line of research, our study employed the latest technology, a multimodal LLM, to attempt radiologic report generation for CXRs. ...
Article
Full-text available
Objective This study aimed to develop an open-source multimodal large language model (CXR-LLaVA) for interpreting chest X-ray images (CXRs), leveraging recent advances in large language models (LLMs) to potentially replicate the image interpretation skills of human radiologists. Materials and methods For training, we collected 592,580 publicly available CXRs, of which 374,881 had labels for certain radiographic abnormalities (Dataset 1) and 217,699 provided free-text radiology reports (Dataset 2). After pre-training a vision transformer with Dataset 1, we integrated it with an LLM influenced by the LLaVA network. Then, the model was fine-tuned, primarily using Dataset 2. The model’s diagnostic performance for major pathological findings was evaluated, along with the acceptability of radiologic reports by human radiologists, to gauge its potential for autonomous reporting. Results The model demonstrated impressive performance in test sets, achieving an average F1 score of 0.81 for six major pathological findings in the MIMIC internal test set and 0.56 for six major pathological findings in the external test set. The model’s F1 scores surpassed those of GPT-4-vision and Gemini-Pro-Vision in both test sets. In human radiologist evaluations of the external test set, the model achieved a 72.7% success rate in autonomous reporting, slightly below the 84.0% rate of ground truth reports. Conclusion This study highlights the significant potential of multimodal LLMs for CXR interpretation, while also acknowledging the performance limitations. Despite these challenges, we believe that making our model open-source will catalyze further research, expanding its effectiveness and applicability in various clinical contexts. Key Points Question How can a multimodal large language model be adapted to interpret chest X-rays and generate radiologic reports? Findings The developed CXR-LLaVA model effectively detects major pathological findings in chest X-rays and generates radiologic reports with a higher accuracy compared to general-purpose models. Clinical relevance This study demonstrates the potential of multimodal large language models to support radiologists by autonomously generating chest X-ray reports, potentially reducing diagnostic workloads and improving radiologist efficiency.
... Challenges include model training difficulties, precision issues for lung segmentation, and the lack of un-annotated X-rays. The study [18] explores the use of deep learning architectures in lung disease diagnosis using CXR images, analyzing 129 articles and finding that pre-trained networks enhance sensitivity and accuracy, while also discussing limitations and future research opportunities. Agrawal et al. [19] explore the use of deep learning in chest radiography for lung segmentation and detection using publicly available datasets, including Generative Adversarial Network models, to address medical data scarcity. ...
Article
Full-text available
The limitation of feature selection is the biggest challenge for machine learning classifiers in disease classification. This research proposes a novel feature extraction method to extract representative features from medical images, combining extracted features with original image pixel features. Additionally, we propose a new method that uses data values from Andrews's curve function to transform chest x-ray images into spectrograms. The spectrogram images are believed to aid in distinguishing near-similar medical images, such as COVID and pneumonia. The study aims to build an efficient machine learning system that applies the proposed feature extraction method and utilizes spectrogram images for distinguishing near-similar medical images. For experimental analysis, we have used the award winning Kaggle Chest Radiography image dataset. The test results show that among all machine learning classifiers, the logistic regression classifier could correctly distinguish COVID and pneumonia images with a 97.18% test accuracy, a 98.34% detection rate, a 97.8% precision rate, and an AUC value of 0.99 on the test dataset. The machine learning model has learned to distinguish between medical images that appear similar using features found through the proposed feature extraction and spectrogram images. The results also proved that the proposed approach using XGBoost has outperformed state-of-the-art models in recent research studies when (i) binary classification is performed using COVID-19 and Normal Chest x-ray images and (ii) multiclass classification is performed using Normal, COVID and Pneumonia Chest x-ray images.
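The Andrews-curve idea described above can be sketched in a few lines of NumPy; the row-wise mapping of image pixels to curves below is only an illustrative assumption of how a 2-D "spectrogram-like" representation might be built, not the authors' exact transform.

```python
# Hedged sketch of an Andrews-curve transform applied row by row to an image.
import numpy as np

def andrews_curve(x, n_points=256):
    """Evaluate f_x(t) = x0/sqrt(2) + x1*sin(t) + x2*cos(t) + x3*sin(2t) + ... on [-pi, pi]."""
    t = np.linspace(-np.pi, np.pi, n_points)
    curve = np.full(n_points, x[0] / np.sqrt(2.0))
    for i, xi in enumerate(x[1:], start=1):
        k = (i + 1) // 2
        curve += xi * (np.sin(k * t) if i % 2 == 1 else np.cos(k * t))
    return curve

rng = np.random.default_rng(0)
image = rng.random((64, 64))                      # stand-in for a normalized chest X-ray
curves = np.stack([andrews_curve(row) for row in image])
print(curves.shape)                               # (64, 256) "spectrogram-like" array
```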
... More recently, machine learning (ML) approaches have been applied to radiographic reconstruction. Many of these ML architectures have outperformed IR methods by a large margin at a specified degradation level [38,39]. Similar to traditional non-ML-based algorithms, ML-based approaches experience difficulty with complex noise fields. ...
Article
Full-text available
We develop an ML-based approach for density reconstruction based on transformer neural networks. This approach is demonstrated in the setting of ICF-like double shell hydrodynamic simulations wherein the parameters related to material properties and initial conditions are varied. The new method can robustly recover the complex topologies given by the Richtmyer-Meshkov instability (RMI) from a sequence of hydrodynamic features derived from radiographic images corrupted with blur, scatter, and noise. A noise model is developed to characterize errors in extracting features from synthetic radiographs of the simulated density field. The key component of the network is a transformer encoder that acts on a sequence of features extracted from noisy radiographs. This encoder includes numerous self-attention layers that act to learn temporal dependencies in the input sequences and increase the expressiveness of the model. This approach is shown to exhibit an excellent ability to accurately recover the RMI growth rates, despite the gas-metal interface being greatly obscured by radiographic noise. Our approach can be applied in a broad array of fields involving shock physics and material science.
... screening of chest diseases [3]. Given that the chest contains vital organs and is prone to a wide range of diseases, it is common for CXR images to indicate the presence of multiple health-related conditions, especially during a patient's initial visit when the specific type of disease may not yet be determined. ...
Article
Full-text available
Chest X-ray is one of the most widely used methods for clinical diagnosis of chest diseases. In recent years, the development of deep learning technologies has driven progress in chest disease detection, but existing methods still face numerous challenges. Current research primarily focuses on detecting specific chest diseases. However, when chest X-ray images indicate multiple diseases, the diverse and complex characteristics of different disease types make it challenging to extract effective information. Additionally, the detection accuracy of small lesions remains low, which lowers the overall lesion recognition rate. To address these issues, a novel network, named YOLO-CXR, is proposed in this paper for multiple disease detection, which is able to effectively locate multiple small lesions in chest X-ray images. First, the proposed network enhances the YOLOv8s backbone by replacing the ordinary convolutional layers with RefConv layers to improve its feature extraction capabilities w.r.t. various diseases. Second, it utilizes a new Efficient Channel and Local Attention (ELCA) mechanism to increase its sensitivity to the spatial location information of different lesions. Third, to enhance its detection of small lesions, YOLO-CXR incorporates a dedicated small-lesion detection head and the Selective Feature Fusion (SFF) technique. Due to these improvements, the proposed network significantly enhances its detection of lesions at different scales and multiple small lesions in particular. Experiments conducted on the publicly available VinDr-CXR dataset demonstrate that YOLO-CXR achieves an mAP@0.5 of 0.338, an mAP@[0.5:0.95:0.05] of 0.167, and a recall of 0.365, outperforming all state-of-the-art networks considered.
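The paper's ELCA module is its own design; as a generic illustration of the channel-attention idea it builds on, the following is a small ECA-style PyTorch block (the kernel size and pooling choices are assumptions).

```python
# Hedged sketch of efficient channel attention: 1-D conv over globally pooled channels.
import torch
import torch.nn as nn

class EfficientChannelAttention(nn.Module):
    def __init__(self, kernel_size=3):
        super().__init__()
        self.conv = nn.Conv1d(1, 1, kernel_size, padding=kernel_size // 2, bias=False)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x):                             # x: (B, C, H, W)
        y = x.mean(dim=(2, 3))                        # global average pooling -> (B, C)
        y = self.conv(y.unsqueeze(1)).squeeze(1)      # local cross-channel interaction
        return x * self.sigmoid(y)[:, :, None, None]  # reweight feature channels

feat = torch.randn(2, 64, 32, 32)
print(EfficientChannelAttention()(feat).shape)        # torch.Size([2, 64, 32, 32])
```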
... Our study offers a distinctive contribution by focusing on the integration and comparison of multi-task and single-task classification and segmentation models specifically for lung infectious diseases. Unlike existing reviews [19][20][21][22][23][24], which primarily concentrate on disease detection methods and specific models, our work provides a detailed comparative analysis of these architectures. This evaluation highlights the performance differences between multi-task and single-task models, which has not been comprehensively addressed in previous reviews. ...
Article
Full-text available
This research investigates advanced approaches in medical image analysis, specifically focusing on segmentation and classification techniques, as well as their integration into multi‐task architectures for lung infections. This research begins by explaining key architectural models used in segmentation and classification tasks. The study extends to the enhancement of these architectures through attention modules and conditional random fields. Relevant datasets and evaluation metrics, incorporating discussions on loss functions are also reviewed. This review encompasses recent advancements in single‐task and multi‐task models, highlighting innovations in semi‐supervised, self‐supervised, few‐shot, and zero‐shot learning techniques. Empirical analysis is conducted on both single‐task and multi‐task architectures, predominantly utilizing the U‐Net framework, and is applied across multiple datasets for segmentation and classification tasks. Results demonstrate the effectiveness of these models and provide insights into the strengths and limitations of different approaches. This research contributes to improved detection and diagnosis of lung infections by offering a comprehensive overview of current methodologies and their practical applications.
... However, their effectiveness remains insufficient due to poor data preprocessing and reliance on inappropriate ready-made features that do not align well with the model's requirements. Because these ready-made features are often defined manually, important information may be discarded or modified, losing crucial details that could be useful for learning a specific model. This is readily apparent in the fields of computer vision and image classification, where the performance of convolutional neural network (CNN) models is impressive compared to conventional machine learning models [7,8]. This is because traditional machine learning models are trained from preprocessed datasets whose features are extracted using costly handcrafted feature algorithms. ...
Article
Full-text available
In the last few years, the use of convolutional neural networks (CNNs) in intrusion detection domains has attracted more and more attention. However, their results in this domain have not lived up to expectations compared to the results obtained in other domains, such as image classification and video analysis. This is mainly due to the datasets used, which contain preprocessed features that are not compatible with convolutional neural networks, as they do not allow a full exploit of all the information embedded in the original network traffic. With the aim of overcoming these issues, we propose in this paper a new efficient convolutional neural network model for network intrusion detection based on raw traffic data (pcap files) rather than preprocessed data stored in CSV files. The novelty of this paper lies in the proposal of a new method for adapting the raw network traffic data to the most suitable format for CNN models, which allows us to fully exploit the strengths of CNNs in terms of pattern recognition and spatial analysis, leading to more accurate and effective results. Additionally, to further improve its detection performance, the structure and hyperparameters of our proposed CNN-based model are automatically adjusted using the self-adaptive differential evolution (SADE) metaheuristic, in which symmetry plays an essential role in balancing the different phases of the algorithm, so that each phase can contribute in an equal and efficient way to finding optimal solutions. This helps to make the overall performance more robust and efficient when solving optimization problems. The experimental results on three datasets, KDD-99, UNSW-NB15, and CIC-IDS2017, show a strong symmetry between the frequency values implemented in the images built for each network traffic and the different attack classes. This was confirmed by a good predictive accuracy that goes well beyond similar competing models in the literature.
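To illustrate the raw-traffic-to-image step described above, here is a small, hypothetical NumPy sketch that pads or truncates a packet's bytes into a fixed-size grayscale image suitable for a CNN; the 32x32 size and zero padding are assumptions, and reading actual pcap files (e.g., with scapy or dpkt) is left out.

```python
# Hedged sketch: encode raw packet bytes as a fixed-size grayscale image for a CNN.
import numpy as np

def bytes_to_image(payload: bytes, side: int = 32) -> np.ndarray:
    """Truncate or zero-pad raw bytes and reshape them into a side x side float image."""
    buf = np.frombuffer(payload, dtype=np.uint8)[: side * side]
    buf = np.pad(buf, (0, side * side - buf.size), constant_values=0)
    return (buf.reshape(side, side) / 255.0).astype(np.float32)

packet = bytes(range(200)) * 3                      # dummy packet standing in for pcap data
image = bytes_to_image(packet)
print(image.shape, image.dtype)                     # (32, 32) float32
```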
... More recently, machine learning (ML) approaches have been applied to radiographic reconstruction. Many of these ML architectures have outperformed IR methods by a large margin at a specified degradation level [38,39]. Similar to traditional non-ML-based algorithms, ML-based approaches experience difficulty with complex noise fields. ...
Preprint
Full-text available
A trained attention-based transformer network can robustly recover the complex topologies given by the Richtmyer-Meshkov instability from a sequence of hydrodynamic features derived from radiographic images corrupted with blur, scatter, and noise. This approach is demonstrated on ICF-like double shell hydrodynamic simulations. The key component of this network is a transformer encoder that acts on a sequence of features extracted from noisy radiographs. This encoder includes numerous self-attention layers that act to learn temporal dependencies in the input sequences and increase the expressiveness of the model. This approach is demonstrated to exhibit an excellent ability to accurately recover the Richtmyer-Meshkov instability growth rates, even when the gas-metal interface is greatly obscured by radiographic noise.
... The ramifications of delayed diagnoses can be severe, resulting in disease progression, decreased treatment effectiveness, and worsened patient prognosis. Moreover, the accurate categorization of lung diseases from radiological images is a multifaceted and time-consuming task, which places a considerable burden on radiologists and healthcare institutions [4]. Consequently, there exists an immediate need for innovative solutions capable of automating and augmenting the diagnostic process, thereby enabling healthcare providers to deliver more efficient and effective care [5]. ...
Preprint
Full-text available
Lung diseases pose a significant global health challenge, underscoring the critical need for prompt and precise diagnoses to facilitate effective treatments and enhance patient outcomes. In this research paper, we introduce an innovative method for the prediction of lung diseases by harnessing the capabilities of deep learning techniques, thereby streamlining and augmenting the diagnostic process. The investigation commences by assembling an extensive dataset of chest X-ray images sourced from diverse origins, encompassing both normal and diseased instances. Subsequently, we employ a specialized Pix2Pix Generative Adversarial Network architecture tailored for image classification. This network is meticulously trained on the comprehensive dataset, fine-tuning its abilities to discern distinctive features associated with a range of lung diseases, including pneumonia, tuberculosis, and lung cancer. The empirical findings underscore the efficacy of the approach in diagnosing lung diseases, showcasing notable levels of accuracy, sensitivity, and specificity. Furthermore, we employ interpretability techniques to pinpoint the regions within the X-ray images that significantly contribute to the predictions, bolstering the transparency and credibility of the model. This research presents a promising avenue for automating the diagnosis of lung diseases, with the potential to reduce human error and enhance patient care significantly. It holds the promise of aiding in early detection and intervention, potentially saving lives and alleviating the burden of lung diseases on healthcare systems worldwide. The integration of DenseNet architecture into our predictive model significantly enhances the accuracy and efficiency of diagnosing lung diseases from X-ray images. DenseNet’s interconnected layers facilitate collaborative learning, enabling the model to discern intricate details crucial for precise classifications. The adoption of DenseNet not only amplifies diagnostic precision but also paves the way for future strides in medical image analysis, particularly in advancing respiratory health diagnostics. The dataset used in this investigation comprises chest X-ray images in JPEG format, systematically categorized into train, test, and val directories, each further subdivided into folders representing Pneumonia and Normal classes. These images depict anterior-posterior chest radiographs obtained from pediatric patients aged one to five years at the Guangzhou Women and Children’s Medical Center, forming an integral part of routine clinical care for this specific demographic. To ensure the dataset’s integrity, a rigorous initial screening process is applied to exclude any low-quality or unreadable scans. Subsequently, two expert physicians meticulously evaluate and grade the diagnostic quality of the remaining images. Only those that successfully pass this comprehensive scrutiny are deemed suitable for the subsequent training of the artificial intelligence system. This meticulous curation underscores the reliability and high quality of the dataset, emphasizing its potential for advancing medical image analysis within the context of pediatric chest radiographs.
Article
Medical imaging is critical in modern healthcare for accurately detecting and diagnosing various medical conditions. Advanced computational techniques, particularly preprocessing methods and deep learning models, have demonstrated significant potential for improving medical image analysis. However, determining the optimal combination of these techniques across different types of medical images remains a challenge. Using empirical experiments, this evaluation research investigates the effectiveness of five popular pairs of preprocessing techniques combined with five widely used deep learning models. Preprocessing methods evaluated include CLAHE + Butterworth, DWT + Threshold, CLAHE + median filter, Median-Mean Hybrid Filter, and Unsharp Masking + Bilateral Filter, concatenated with deep learning models: EfficientNet-B4, ResNet-50, DenseNet-169, VGG16 and MobileNetV2. The performance of these combinations was evaluated through experiments carried out on eight diverse and commonly used datasets encompassing various medical imaging modalities. These datasets include two X-ray collections: the COVID-19 Pneumonia Normal Chest PA Dataset and the Osteoporosis Knee X-ray Dataset; two CT scan datasets: the Chest CT-Scan Images Dataset and the Brain Stroke CT Image Dataset; two MRI datasets: the Breast Cancer Patients MRI and the Brain Tumor MRI Dataset; and two ultrasound datasets: the Ultrasound Breast Images for Breast Cancer and the MT Small Dataset. Our findings show that the Median-Mean Hybrid Filter and Unsharp Masking + Bilateral Filter are the most effective preprocessing methods, achieving an efficiency rate of 87.5%. Among the deep learning models, EfficientNet-B4 and MobileNetV2 are the highest performing models with an efficiency ratio of 75%, with MobileNetV2 providing up to 34% shorter runtime compared to other models. This study provides a thorough evaluation of the performance of different preprocessing methods and deep learning algorithms across commonly used medical imaging modalities. Presenting empirical results from our experiments offers practical insights into choosing the most suitable preprocessing techniques and deep learning models for various types of medical images. These findings are intended to support improvements in diagnostic accuracy and efficiency in medical imaging, offering a valuable reference for enhancing image-based diagnostic processes.
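Two of the evaluated preprocessing pairs can be sketched with OpenCV as follows; the clip limit, kernel sizes, and bilateral-filter parameters are illustrative assumptions, not the settings used in the study.

```python
# Hedged sketch of "CLAHE + median filter" and "Unsharp Masking + Bilateral Filter".
import cv2
import numpy as np

def clahe_median(gray: np.ndarray) -> np.ndarray:
    """Contrast-limited adaptive histogram equalization followed by median denoising."""
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    return cv2.medianBlur(clahe.apply(gray), 3)

def unsharp_bilateral(gray: np.ndarray) -> np.ndarray:
    """Unsharp masking for edge emphasis, then edge-preserving bilateral smoothing."""
    blurred = cv2.GaussianBlur(gray, (0, 0), sigmaX=3)
    sharpened = cv2.addWeighted(gray, 1.5, blurred, -0.5, 0)
    return cv2.bilateralFilter(sharpened, d=9, sigmaColor=75, sigmaSpace=75)

gray = (np.random.rand(256, 256) * 255).astype(np.uint8)   # stand-in X-ray
print(clahe_median(gray).shape, unsharp_bilateral(gray).shape)
```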
Article
Full-text available
X-ray image enhancement can aid a physician’s diagnosis by improving lesion visibility. This study proposes a chest X-ray image enhancement framework for enhancing lesion visibility while preserving image features. Our framework assesses the background signals, whereas conventional methods focus on the visibility of the global image. The proposed method predicts the image processing parameters that enhance the lesion signals via the inference neural network. The framework consists of an X-ray image enhancer and an enhanced model predictor for reference. The enhancer regressively estimates the processing parameters for enhancing the lesions using the inference network and processes the input X-ray image. As the inference network requires training, the model predictor computes the reference parameters that maximize the visibility of the lesions within a tolerable loss of fidelity using image pairs—with and without lesions. We created a synthesized dataset, with and without lesions, from healthy chest and phantom lesion X-ray images. The experiments show that after the proposed method was trained on 2000 images, it improved lesion visibility with an acceptable fidelity loss. We also performed pairwise comparisons and confirmed that trade-offs between fidelity loss and visibility gain were attained. A technique for improving lesion visibility while maintaining the fidelity of X-ray images was developed. This method enabled the enhancement of specific signals in the background. Various image processing methods that require parameters could be incorporated into this framework for many different applications.
Article
Full-text available
Medical imaging is a critical tool for diagnosing and treating various diseases such as Chronic Obstructive Pulmonary Disease (COPD), tuberculosis, lung cancer, and Coronavirus. Techniques such as X-rays, Computed Tomography (CT), Magnetic Resonance Imaging (MRI), and Positron Emission Tomography (PET) play essential roles in identifying the physical and functional aspects of the lungs. Manual lung segmentation by radiologists, while adjustable, is time-consuming and subject to variability. Consequently, automated lung segmentation methods utilizing Machine Learning (ML) and Deep Learning (DL) have emerged as essential alternatives. This review highlights advancements in automated lung segmentation, focusing on traditional ML methods and state-of-the-art DL approaches, particularly Convolutional Neural Networks (CNNs) and Generative Adversarial Networks (GANs). While these techniques hold great promise, challenges remain, such as the need for annotated datasets, computational demands, and integration into clinical workflows. This paper explores current applications, identifies challenges, and outlines future opportunities for improving the precision and efficiency of lung segmentation through interdisciplinary collaboration in medical imaging, computer science, and clinical practice.
Article
The computerized delineation and prognosis of lung cancer is typically based on Computed Tomography (CT) image analysis, whereby the region of interest (ROI) is accurately demarcated and classified. Deep learning in computer vision provides a different perspective to image segmentation. Due to the increasing number of cases of lung cancer and the availability of large volumes of CT scans every day, the need for automated handling becomes imperative. This requires efficient delineation and diagnosis through the design of new techniques for improved accuracy. In this article, we introduce the novel Weighted Deformable U-Net (WD U-Net) for efficient delineation of the tumor region. It incorporates the Deformable Convolution (DC) that can model arbitrary geometric shapes of regions of interest. This is enhanced by the Weight Generation (WG) module to suppress unimportant features while highlighting relevant ones. A new Focal Asymmetric Similarity (FAS) loss function helps handle class imbalance. Ablation studies and comparison with state-of-the-art models help establish the effectiveness of WD U-Net with ensemble learning, tested on five publicly available lung cancer datasets. Best results were obtained on the LIDC-IDRI lung tumor test dataset, with an average Dice score of 0.9137, a 95% Hausdorff Distance (HD95) of 5.3852, and an Area Under the Receiver Operating Characteristic (ROC) Curve (AUC) of 0.9449.
Article
Pneumonia is a severe health concern, particularly for vulnerable groups, requiring early and correct classification for optimal treatment. This study addresses the use of deep learning combined with machine learning classifiers (DLxMLCs) for pneumonia classification from chest X-ray (CXR) images. We deployed modified VGG19, ResNet50V2, and DenseNet121 models for feature extraction, followed by five machine learning classifiers (logistic regression, support vector machine, decision tree, random forest, artificial neural network). The approach we suggested displayed remarkable accuracy, with VGG19 and DenseNet121 models obtaining 99.98% accuracy when combined with random forest or decision tree classifiers. ResNet50V2 achieved 99.25% accuracy with random forest. These results illustrate the advantages of merging deep learning models with machine learning classifiers in boosting the speedy and accurate identification of pneumonia. The study underlines the potential of DLxMLC systems in enhancing diagnostic accuracy and efficiency. By integrating these models into clinical practice, healthcare practitioners could greatly boost patient care and results. Future research should focus on refining these models and exploring their application to other medical imaging tasks, as well as including explainability methodologies to better understand their decision-making processes and build trust in their clinical use. This technique promises meaningful breakthroughs in medical imaging and patient management.
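The DLxMLC pattern (a CNN as a fixed feature extractor feeding a classical classifier) can be sketched as follows; the DenseNet121 backbone, dummy data, and random-forest settings are assumptions for illustration rather than the study's exact configuration.

```python
# Hedged sketch: pretrained CNN features -> scikit-learn random forest classifier.
import torch
import torch.nn as nn
from torchvision import models
from sklearn.ensemble import RandomForestClassifier

backbone = models.densenet121(weights="IMAGENET1K_V1")
backbone.classifier = nn.Identity()            # expose the 1024-d pooled features
backbone.eval()

@torch.no_grad()
def extract_features(batch):                   # batch: (N, 3, 224, 224) tensor
    return backbone(batch).numpy()

images = torch.randn(16, 3, 224, 224)          # stand-in for preprocessed chest X-rays
labels = torch.randint(0, 2, (16,)).numpy()    # 0 = normal, 1 = pneumonia (dummy labels)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(extract_features(images), labels)
print(clf.predict(extract_features(images[:4])))
```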
Article
Full-text available
In this study, multiple lung diseases are diagnosed with the help of the Neural Network algorithm. Specifically, Emphysema, Infiltration, Mass, Pleural Thickening, Pneumonia, Pneumothorax, Atelectasis, Edema, Effusion, Hernia, Cardiomegaly, Pulmonary Fibrosis, Nodule, and Consolidation are studied from the ChestX-ray14 dataset. A proposed fine-tuned MobileLungNetV2 model is employed for analysis. Initially, pre-processing is done on the X-ray images from the dataset using CLAHE to increase image contrast. Additionally, a Gaussian Filter, to denoise images, and data augmentation methods are used. The pre-processed images are fed into several transfer learning models, such as InceptionV3, AlexNet, DenseNet121, VGG19, and MobileNetV2. Among these models, MobileNetV2 performed with the highest accuracy of 91.6% in overall classification of lesions on chest X-ray images. This model is then fine-tuned to optimise the MobileLungNetV2 model. On the pre-processed data, the fine-tuned model, MobileLungNetV2, achieves an extraordinary classification accuracy of 96.97%. Using a confusion matrix for all the classes, it is determined that the model has overall high precision, recall, and specificity scores of 96.71%, 96.83% and 99.78% respectively. The study employs the Grad-CAM output to determine the heatmap of disease detection. The proposed model shows promising results in classifying multiple lesions on chest X-ray images.
Article
Full-text available
According to the World Health Organization, millions of infections and many deaths have been recorded worldwide since the emergence of the coronavirus disease (COVID-19). Since 2020, many computer science researchers have used convolutional neural networks (CNNs) to develop interesting frameworks to detect this disease. However, poor feature extraction from the chest X-ray images and the high computational cost of the available models introduce difficulties for an accurate and fast COVID-19 detection framework. Moreover, poor feature extraction has caused the issue of ‘the curse of dimensionality’, which will negatively affect the performance of the model. Feature selection is typically considered as a preprocessing mechanism to find an optimal subset of features from a given set of all features in the data mining process. Thus, the major purpose of this study is to offer an accurate and efficient approach for extracting COVID-19 features from chest X-rays that is also less computationally expensive than earlier approaches. To achieve the specified goal, we design a mechanism for feature extraction based on a shallow convolutional neural network (SCNN) and use an effective method for selecting features by utilizing the newly developed optimization algorithm, the Q-Learning Embedded Sine Cosine Algorithm (QLESCA). Support vector machines (SVMs) are used as classifiers. Five publicly available chest X-ray image datasets, consisting of 4848 COVID-19 images and 8669 non-COVID-19 images, are used to train and evaluate the proposed model. The performance of the QLESCA is evaluated against nine recent optimization algorithms. The proposed method is able to achieve the highest accuracy of 97.8086% while reducing the number of features from 100 to 38. Experiments prove that the accuracy of the model improves with the usage of the QLESCA as the dimensionality reduction technique by selecting relevant features.
Article
Full-text available
Explainable Artificial Intelligence is a key component of artificially intelligent systems that aim to explain the classification results. The explanation of classification results is essential for automatic disease diagnosis in healthcare. The human respiratory system is badly affected by different chest pulmonary diseases. Automatic classification and explanation can be used to detect these lung diseases. In this paper, we introduced a CNN-based transfer learning approach for automatically explaining pulmonary diseases, i.e., edema, tuberculosis, nodules, and pneumonia, from chest radiographs. Among these pulmonary diseases, pneumonia, which COVID-19 causes, is deadly; therefore, radiographs of COVID-19 are used for the explanation task. We used the ResNet50 neural network and trained the network extensively on the COVID-CT dataset and the COVIDNet dataset. The interpretable model LIME is used for the explanation of classification results. LIME highlights the input image’s important features for generating the classification result. We evaluated the explanations using radiologists’ highlighted images and identified that our model highlights and explains the same regions. We achieved improved classification results with our fine-tuned model, with accuracies of 93% and 97%, respectively. The analysis of our results indicates that this research not only improves the classification results but also provides an explanation of pulmonary diseases with advanced deep-learning methods. This research would assist radiologists with automatic disease detection and explanations, which are used to make clinical decisions and assist in diagnosing and treating pulmonary diseases in the early stage.
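A LIME explanation of a CNN prediction, as used above, can be sketched like this; the ImageNet-pretrained ResNet50, the random stand-in image, and the preprocessing are placeholders rather than the paper's fine-tuned model.

```python
# Hedged sketch: explaining a CNN image prediction with LIME superpixel masks.
import numpy as np
import torch
from torchvision import models, transforms
from lime import lime_image

model = models.resnet50(weights="IMAGENET1K_V1").eval()
to_tensor = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

def predict_fn(images: np.ndarray) -> np.ndarray:
    """LIME passes a batch of perturbed HxWx3 images; return class probabilities."""
    batch = torch.stack([to_tensor(img.astype(np.float32)) for img in images])
    with torch.no_grad():
        return torch.softmax(model(batch), dim=1).numpy()

xray_rgb = np.random.rand(224, 224, 3)                     # stand-in for a radiograph
explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(xray_rgb, predict_fn,
                                         top_labels=3, num_samples=200)
img, mask = explanation.get_image_and_mask(explanation.top_labels[0],
                                           positive_only=True, num_features=5)
print(mask.shape)   # superpixel regions supporting the top predicted class
```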
Article
Full-text available
Artificial intelligence has significantly enhanced the research paradigm and spectrum with a substantiated promise of continuous applicability in the real world domain. Artificial intelligence, the driving force of the current technological revolution, has been used in many frontiers, including education, security, gaming, finance, robotics, autonomous systems, entertainment, and most importantly the healthcare sector. With the rise of the COVID-19 pandemic, several prediction and detection methods using artificial intelligence have been employed to understand, forecast, handle, and curtail the ensuing threats. In this study, the most recent related publications, methodologies and medical reports were investigated with the purpose of studying artificial intelligence's role in the pandemic. This study presents a comprehensive review of artificial intelligence with specific attention to machine learning, deep learning, image processing, object detection, image segmentation, and few-shot learning studies that were utilized in several tasks related to COVID-19. In particular, genetic analysis, medical image analysis, clinical data analysis, sound analysis, biomedical data classification , socio-demographic data analysis, anomaly detection, health monitoring, personal protective equipment (PPE) observation, social control, and COVID-19 patients' mortality risk approaches were used in this study to forecast the threatening factors of COVID-19. This study demonstrates that artificial-intelligence-based algorithms integrated into Internet of Things wearable devices were quite effective and efficient in COVID-19 detection and forecasting insights which were actionable through wide usage. The results produced by the study prove that artificial intelligence is a promising arena of research that can be applied for disease prognosis, disease forecasting, drug discovery, and to the development of the healthcare sector on a global scale. We prove that artificial intelligence indeed played a significantly important role in helping to fight against COVID-19, and the insightful knowledge provided here could be extremely beneficial for practitioners and research experts in the healthcare domain to implement the artificial-intelligence-based systems in curbing the next pandemic or healthcare disaster.
Article
Full-text available
This paper proposes a new deep learning (DL) framework for the analysis of lung diseases, including COVID-19 and pneumonia, from chest CT scans and X-ray (CXR) images. This framework is termed optimized DenseNet201 for lung diseases (LDDNet). The proposed LDDNet was developed by adding layers of 2D global average pooling, dense and dropout layers, and batch normalization to the base DenseNet201 model. There is a 1024-unit ReLU-activated dense layer and a 256-unit dense layer using the sigmoid activation method. The hyper-parameters of the model, including the learning rate, batch size, epochs, and dropout rate, were tuned for the model. Next, three datasets of lung diseases were formed from separate open-access sources. One was a CT scan dataset containing 1043 images. Two X-ray datasets comprising images of COVID-19-affected lungs, pneumonia-affected lungs, and healthy lungs exist, with one being an imbalanced dataset with 5935 images and the other being a balanced dataset with 5002 images. The performance of each model was analyzed using the Adam, Nadam, and SGD optimizers. The best results have been obtained for both the CT scan and CXR datasets using the Nadam optimizer. For the CT scan images, LDDNet showed a COVID-19-positive classification accuracy of 99.36%, a precision of 100%, a recall of 98%, and an F1 score of 99%. For the X-ray dataset of 5935 images, LDDNet provides a 99.55% accuracy, 73% recall, 100% precision, and 85% F1 score using the Nadam optimizer in detecting COVID-19-affected patients. For the balanced X-ray dataset, LDDNet provides a 97.07% classification accuracy. For a given set of parameters, the performance results of LDDNet are better than the existing algorithms of ResNet152V2 and XceptionNet.
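Read literally, an LDDNet-style head can be sketched in Keras as below; the exact layer order, unit counts, dropout rate, and three-class output are loose readings of the abstract and should be treated as assumptions, not the authors' implementation.

```python
# Hedged Keras sketch of a DenseNet201 base with an LDDNet-style custom head.
import tensorflow as tf
from tensorflow.keras import layers, models

base = tf.keras.applications.DenseNet201(include_top=False, weights="imagenet",
                                          input_shape=(224, 224, 3))
base.trainable = False                         # optionally fine-tune later

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.BatchNormalization(),
    layers.Dense(1024, activation="relu"),
    layers.Dropout(0.3),
    layers.Dense(256, activation="sigmoid"),
    layers.Dense(3, activation="softmax"),     # COVID-19 / pneumonia / normal (assumed)
])
model.compile(optimizer=tf.keras.optimizers.Nadam(learning_rate=1e-4),
              loss="categorical_crossentropy", metrics=["accuracy"])
```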
Article
Full-text available
COVID-19 is a worldwide pandemic that has affected many people, and thousands of individuals have died due to COVID-19 during the last two years. Due to the benefits of Artificial Intelligence (AI) in X-ray image interpretation, sound analysis, diagnosis, patient monitoring, and CT image identification, it has been further researched in the area of medical science during the period of COVID-19. This study has assessed the performance and investigated different machine learning (ML), deep learning (DL), and combinations of various ML, DL, and AI approaches that have been employed in recent studies with diverse data formats to combat the problems that have arisen due to the COVID-19 pandemic. Finally, this study compares the stand-alone ML- and DL-based research works regarding the COVID-19 issues with the combined ML, DL, and AI-based research works. After in-depth analysis and comparison, this study responds to the proposed research questions and presents the future research directions in this context. This review work will guide different research groups to develop viable applications based on ML, DL, and AI models, and will also guide healthcare institutes, researchers, and governments by showing them how these techniques can ease the process of tackling COVID-19.
Article
Full-text available
Deep transfer learning (DTL), which incorporates new ideas from deep neural networks into transfer learning (TL), has achieved excellent success in computer vision, text classification, behavior recognition, and natural language processing. As a branch of machine learning, DTL applies end-to-end learning to overcome the drawback of traditional machine learning that regards each dataset individually. Although some valuable and impressive general surveys exist on TL, special attention and recent advances in DTL are lacking. In this survey, we first review more than 50 representative approaches of DTL in the last decade and systematically summarize them into four categories. In particular, we further divide each category into subcategories according to models, functions, and operation objects. In addition, we discuss recent advances in TL in other fields and unsupervised TL. Finally, we provide some possible and exciting future research directions.
Conference Paper
Full-text available
The state of the art of artificial intelligence (AI) for various medical imaging applications leads to enhanced accuracy, analysis, visualization, and interpretation of chest X-ray (CXR) images for diagnosis. Many diseases are diagnosed based on CXR images. In this paper, two types of abnormalities are diagnosed based on AI techniques. The two classes are atelectasis and cardiomegaly. The acquired images are segmented to localize the chest region and then enhanced using gray-level transformation methods. The enhanced images are passed to two pretrained convolutional neural networks (CNNs): ShuffleNet and MobileNet. The transfer learning approach is utilized in this stage. The automated features are extracted from the last fully connected layer. From each CNN, the two most representative features for the two classes are taken. These four features are passed to a support vector machine classifier. The training accuracy reached 100% and the test accuracy was 96.7%. The proposed method can be extended to be a milestone in the classification of all heart-lung diseases that can be diagnosed using chest X-ray images.
Article
Full-text available
Chest and lung diseases are among the most serious chronic diseases in the world, and they occur as a result of factors such as smoking, air pollution, or bacterial infection, which would expose the respiratory system and chest to serious disorders. Chest diseases lead to a natural weakness in the respiratory system, which requires the patient to take care and attention to alleviate this problem. Countries are interested in encouraging medical research and monitoring the spread of communicable diseases. Therefore, they have advised researchers to perform studies to curb the diseases’ spread and have urged them to devise methods for swiftly and readily detecting and distinguishing lung diseases. In this paper, we propose a hybrid architecture of contrast-limited adaptive histogram equalization (CLAHE) and a deep convolutional network for the classification of lung diseases. We used X-ray images to create a convolutional neural network (CNN) for early identification and categorization of lung diseases. Initially, the proposed method implemented the support vector machine to classify the images with and without using the CLAHE equalizer. The obtained results were compared with the CNN networks. Later, two different experiments were implemented with the hybrid architecture of deep CNN networks and CLAHE as a preprocessing step for image enhancement. The experimental results indicate that the suggested hybrid architecture outperforms traditional methods by roughly 20% in terms of accuracy.
Article
Full-text available
Background and Motivation: COVID-19 has resulted in a massive loss of life during the last two years. The current imaging-based diagnostic methods for COVID-19 detection in multiclass pneumonia-type chest X-rays are not so successful in clinical practice due to high error rates. Our hypothesis states that if we can have a segmentation-based classification error rate <5%, typically adopted for 510 (K) regulatory purposes, the diagnostic system can be adapted in clinical settings. Method: This study proposes 16 types of segmentation-based classification deep learning-based systems for automatic, rapid, and precise detection of COVID-19. The two deep learning-based segmentation networks, namely UNet and UNet+, along with eight classification models, namely VGG16, VGG19, Xception, InceptionV3, Densenet201, NASNetMobile, Resnet50, and MobileNet, were applied to select the best-suited combination of networks. Using the cross-entropy loss function, the system performance was evaluated by Dice, Jaccard, area-under-the-curve (AUC), and receiver operating characteristics (ROC) and validated using Grad-CAM in explainable AI framework. Results: The best performing segmentation model was UNet, which exhibited the accuracy, loss, Dice, Jaccard, and AUC of 96.35%, 0.15%, 94.88%, 90.38%, and 0.99 (p-value <0.0001), respectively. The best performing segmentation-based classification model was UNet+Xception, which exhibited the accuracy, precision, recall, F1-score, and AUC of 97.45%, 97.46%, 97.45%, 97.43%, and 0.998 (p-value <0.0001), respectively. Our system outperformed existing methods for segmentation-based classification models. The mean improvement of the UNet+Xception system over all the remaining studies was 8.27%. Conclusion: The segmentation-based classification is a viable option as the hypothesis (error rate <5%) holds true and is thus adaptable in clinical practice.
Article
Full-text available
To accurately diagnose multiple lung diseases from chest X-rays, the critical aspect is to identify lung diseases with high sensitivity and specificity. This study proposed a novel multi-class classification framework that minimises either false positives or false negatives, which is useful in computer-aided diagnosis or computer-aided detection, respectively. To minimise false positives or false negatives, we generated the respective stacked ensembles from pre-trained models and fully connected layers using a selection metric and a systematic method. The diversity of the base classifiers was based on the diverse sets of false positives or false negatives generated. The proposed multi-class framework was evaluated on two chest X-ray datasets, and the performance was compared with the existing models and base classifiers. Moreover, we used LIME (Local Interpretable Model-agnostic Explanations) to locate the regions focused on by the multi-class classification framework.
Article
Full-text available
Globally, coal remains one of the natural resources that provide power to the world. Thousands of people are involved in coal collection, processing, and transportation. Particulate coal dust is produced during these processes, which can crush the lung structure of workers and cause pneumoconiosis. There is no automated system for detecting and monitoring the disease in coal miners; detection currently relies on specialist radiologists. This paper proposes ensemble learning techniques for detecting pneumoconiosis disease in chest X-ray radiographs (CXRs) using multiple deep learning models. Three ensemble learning techniques (simple averaging, multi-weighted averaging, and majority voting (MVOT)) were proposed to investigate performances using randomised cross-folds and leave-one-out cross-validation datasets. Five statistical measurements were used to compare the outcomes of the three investigations on the proposed integrated approach with state-of-the-art approaches from the literature for the same dataset. In the second investigation, the statistical combination was marginally enhanced in the ensemble of multi-weighted averaging on a robust model, CheXNet. However, in the third investigation, the same model elevated accuracies from 87.80% to 90.2%. The investigated results helped us identify a robust deep learning model and ensemble framework that outperformed others, achieving an accuracy of 91.50% in the automated detection of pneumoconiosis.
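The three combination rules named above (simple averaging, weighted averaging, and majority voting) reduce to a few NumPy lines; the toy probabilities and weights below are made up purely for illustration.

```python
# Hedged sketch of simple averaging, weighted averaging, and majority voting.
import numpy as np

probs = np.array([              # (n_models, n_samples, n_classes) toy predictions
    [[0.8, 0.2], [0.4, 0.6]],
    [[0.6, 0.4], [0.3, 0.7]],
    [[0.9, 0.1], [0.6, 0.4]],
])

simple_avg = probs.mean(axis=0).argmax(axis=1)

weights = np.array([0.5, 0.2, 0.3])              # e.g., proportional to validation accuracy
weighted_avg = np.tensordot(weights, probs, axes=1).argmax(axis=1)

votes = probs.argmax(axis=2)                     # each model's hard prediction
majority = np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, votes)

print(simple_avg, weighted_avg, majority)        # [0 1] [0 1] [0 1]
```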
Article
Full-text available
Introduction Pneumonia is a microorganism infection that causes chronic inflammation of the human lung cells. Chest X-ray imaging is the most well-known screening approach used for detecting pneumonia in the early stages. Because chest X-ray images are mostly blurry with low illumination, a strong feature extraction approach is required for promising identification performance. Objectives A new hybrid explainable deep learning framework is proposed for accurate pneumonia disease identification using chest X-ray images. Methods The proposed hybrid workflow is developed by fusing the capabilities of both ensemble convolutional networks and the Transformer Encoder mechanism. The ensemble learning backbone is used to extract strong features from the raw input X-ray images in two different scenarios: ensemble A (i.e., DenseNet201, VGG16, and GoogleNet) and ensemble B (i.e., DenseNet201, InceptionResNetV2, and Xception). The Transformer Encoder, in turn, is built on the self-attention mechanism with a multilayer perceptron (MLP) for accurate disease identification. Visual explainable saliency maps are derived to emphasize the crucial predicted regions in the input X-ray images. The end-to-end training process of the proposed deep learning models over all scenarios is performed for binary and multi-class classification tasks. Results The proposed hybrid deep learning model recorded 99.21% classification performance in terms of overall accuracy and F1-score for the binary classification task, while it achieved 98.19% accuracy and 97.29% F1-score for the multi-class classification task. For the ensemble binary identification scenario, ensemble A recorded 97.22% accuracy and 97.14% F1-score, while ensemble B achieved 96.44% for both accuracy and F1-score. For the ensemble multi-class identification scenario, ensemble A recorded 97.2% accuracy and 95.8% F1-score, while ensemble B recorded 96.4% accuracy and 94.9% F1-score. Conclusion The proposed hybrid deep learning framework could provide promising and encouraging explainable identification performance compared with individual models, ensemble models, or even the latest models in the literature. The code is available here: https://github.com/chiagoziemchima/Pneumonia_Identificaton.
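One way to read the "ensemble CNN features + Transformer Encoder" design above is to treat each backbone's pooled feature vector as a token; the PyTorch sketch below follows that reading, with the feature dimensions, embedding size, and head depth chosen purely for illustration.

```python
# Hedged sketch: per-backbone feature vectors as tokens for a Transformer Encoder.
import torch
import torch.nn as nn

class FusionTransformer(nn.Module):
    def __init__(self, feat_dims, embed_dim=256, num_classes=3):
        super().__init__()
        self.proj = nn.ModuleList(nn.Linear(d, embed_dim) for d in feat_dims)
        layer = nn.TransformerEncoderLayer(d_model=embed_dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Sequential(nn.LayerNorm(embed_dim), nn.Linear(embed_dim, num_classes))

    def forward(self, feats):                    # feats: list of (B, d_i) tensors
        tokens = torch.stack([p(f) for p, f in zip(self.proj, feats)], dim=1)
        encoded = self.encoder(tokens)           # (B, n_backbones, embed_dim)
        return self.head(encoded.mean(dim=1))    # pooled tokens -> class logits

# Dummy pooled features standing in for, e.g., DenseNet201 (1920-d) and VGG16 (512-d).
model = FusionTransformer(feat_dims=[1920, 512])
print(model([torch.randn(4, 1920), torch.randn(4, 512)]).shape)   # torch.Size([4, 3])
```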
Article
Full-text available
The novel coronavirus disease (COVID-19) is a pandemic disease that is currently affecting over 200 countries around the world, and more than 6 million people have died in the last 2 years. Early detection of COVID-19 can mitigate and control its spread. Reverse transcription polymerase chain reaction (RT-PCR), chest X-ray (CXR) scans, and Computerized Tomography (CT) scans are used to identify COVID-19. Chest X-ray image analysis is relatively time-efficient compared with RT-PCR and CT scans. Its cost-effectiveness makes it a good choice for COVID-19 classification. We propose a deep learning based Convolutional Neural Network model for the detection of COVID-19 from CXRs. Chest X-ray images are collected from various source datasets, widely used for COVID-19 detection and diagnosis, for training with augmentation and for evaluating our model. A deep Convolutional Neural Network (CNN) based model for the analysis of COVID-19 with data augmentation is proposed, which uses the patient’s chest X-ray images for the diagnosis of COVID-19, with the aim of helping physicians in the diagnostic process under high-workload conditions. An overall accuracy of 93 percent for COVID-19 classification is achieved by choosing the best optimizer.
Article
Full-text available
Diagnosing COVID-19, the current pandemic disease, using chest X-ray images is widely used to evaluate lung disorders. As the spread of the disease is enormous, many medical camps are being conducted to screen patients, and chest X-ray is a simple imaging modality to detect the presence of lung disorders. Manual lung disorder detection from chest X-rays by radiologists is a tedious process and may lead to inter- and intra-rater errors. Various deep convolutional neural network techniques were tested for detecting COVID-19 abnormalities in lungs using chest X-ray images. This paper proposes a deep learning model to classify COVID-19 and normal chest X-ray images. Experiments are carried out for deep feature extraction, fine-tuning of convolutional neural network (CNN) hyperparameters, and end-to-end training of four variants of the CNN model. The proposed CovMnet provides a better classification accuracy of 97.4% for COVID-19/normal than those reported in previous studies. The proposed CovMnet model has the potential to aid radiologists in monitoring COVID-19 disease and proves to be an efficient, non-invasive COVID-19 diagnostic tool for lung disorders.
Article
Full-text available
The COVID-19 pandemic has caused a devastating impact on social activity, the economy, and politics worldwide. Techniques to diagnose COVID-19 cases by examining anomalies in chest X-ray images are urgently needed. Inspired by the success of deep learning in various tasks, this paper evaluates the performance of four deep neural networks in detecting COVID-19 patients from their chest radiographs. The deep neural networks studied include VGG16, MobileNet, ResNet50 and DenseNet201. Preliminary experiments show that all deep neural networks perform promisingly, while DenseNet201 outshines other models. Nevertheless, the sensitivity rates of the models are below expectations, which can be attributed to several factors: limited publicly available COVID-19 images, imbalanced sample size for the COVID-19 class and non-COVID-19 class, overfitting or underfitting of the deep neural networks and that the feature extraction of pre-trained models does not adapt well to the COVID-19 detection task. To address these factors, several enhancements are proposed, including data augmentation, adjusted class weights, early stopping and fine-tuning, to improve the performance. Empirical results on DenseNet201 with these enhancements demonstrate outstanding performance with an accuracy of 0.999, precision of 0.9899, sensitivity of 0.98, specificity of 0.9997 and F1-score of 0.9849 on the COVID-Xray-5k dataset.
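The listed enhancements (augmentation, adjusted class weights, early stopping, and fine-tuning) map directly onto standard Keras facilities; the sketch below is a hypothetical configuration with placeholder values, not the settings used in the study.

```python
# Hedged Keras sketch: fine-tuned DenseNet201 with augmentation, class weights, early stopping.
import tensorflow as tf
from tensorflow.keras import layers

base = tf.keras.applications.DenseNet201(include_top=False, weights="imagenet",
                                          input_shape=(224, 224, 3), pooling="avg")
base.trainable = True                            # fine-tune the pretrained backbone

model = tf.keras.Sequential([
    layers.RandomFlip("horizontal"),             # lightweight data augmentation
    layers.RandomRotation(0.05),
    base,
    layers.Dense(1, activation="sigmoid"),       # COVID-19 vs. non-COVID-19
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-5),
              loss="binary_crossentropy",
              metrics=["accuracy", tf.keras.metrics.AUC()])

early_stop = tf.keras.callbacks.EarlyStopping(monitor="val_loss", patience=3,
                                              restore_best_weights=True)
class_weight = {0: 1.0, 1: 5.0}                  # up-weight the rarer COVID-19 class
# model.fit(train_ds, validation_data=val_ds, epochs=30,
#           class_weight=class_weight, callbacks=[early_stop])
```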
Article
Full-text available
Objective To report the global, regional, and national burden of chronic obstructive pulmonary disease (COPD) and its attributable risk factors between 1990 and 2019, by age, sex, and sociodemographic index. Design Systematic analysis. Data source Global Burden of Disease Study 2019. Main outcome measures Data on the prevalence, deaths, and disability adjusted life years (DALYs) of COPD, and its attributable risk factors, were retrieved from the Global Burden of Disease 2019 project for 204 countries and territories, between 1990 and 2019. The counts and rates per 100 000 population, along with 95% uncertainty intervals, were presented for each estimate. Results In 2019, 212.3 million prevalent cases of COPD were reported globally, with COPD accounting for 3.3 million deaths and 74.4 million DALYs. The global age standardised point prevalence, death, and DALY rates for COPD were 2638.2 (95% uncertainty intervals 2492.2 to 2796.1), 42.5 (37.6 to 46.3), and 926.1 (848.8 to 997.7) per 100 000 population, which were 8.7%, 41.7%, and 39.8% lower than in 1990, respectively. In 2019, Denmark (4299.5), Myanmar (3963.7), and Belgium (3927.7) had the highest age standardised point prevalence of COPD. Egypt (62.0%), Georgia (54.9%), and Nicaragua (51.6%) showed the largest increases in age standardised point prevalence across the study period. In 2019, Nepal (182.5) and Japan (7.4) had the highest and lowest age standardised death rates per 100 000, respectively, and Nepal (3318.4) and Barbados (177.7) had the highest and lowest age standardised DALY rates per 100 000, respectively. In men, the global DALY rate of COPD increased up to age 85-89 years and then decreased with advancing age, whereas for women the rate increased up to the oldest age group (≥95 years). Regionally, an overall reversed V shaped association was found between sociodemographic index and the age standardised DALY rate of COPD. Factors contributing most to the DALYs rates for COPD were smoking (46.0%), pollution from ambient particulate matter (20.7%), and occupational exposure to particulate matter, gases, and fumes (15.6%). Conclusions Despite the decreasing burden of COPD, this disease remains a major public health problem, especially in countries with a low sociodemographic index. Preventive programmes should focus on smoking cessation, improving air quality, and reducing occupational exposures to further reduce the burden of COPD.
Article
Full-text available
Emphysema is among the top five diseases in the Western world in terms of rehabilitation and healthcare costs, and computer-aided diagnosis of this type of respiratory tract disease is steadily gaining importance. In this study, we aimed to classify emphysema with a transfer learning approach using single-label, emphysema-diagnosed data obtained from three large datasets. We classified images from the ChestX-ray14, CheXpert, and PadChest databases with an Area Under the Curve (AUC) of 95% using a fully connected layer model on a DenseNet-121 pre-trained neural network, and an AUC of 90% with an Xception pre-trained neural network. We evaluate this proposed deep learning-based model as an effective and practical diagnostic tool for emphysema alone, using X-ray data. Notably, transfer learning is a very functional approach for differentiating between normal and diseased cases in similar diseases that have newly emerged during the current pandemic period.
Article
Full-text available
Although deep learning-based computer-aided diagnosis systems have recently achieved expert-level performance, developing a robust model requires large, high-quality data with annotations that are expensive to obtain. This poses a conundrum: annually collected chest X-rays cannot be utilized due to the absence of labels, especially in deprived areas. In this study, we present a framework named distillation for self-supervision and self-train learning (DISTL), inspired by the learning process of radiologists, which can improve the performance of a vision transformer simultaneously through self-supervision and self-training via knowledge distillation. In external validation from three hospitals for the diagnosis of tuberculosis, pneumothorax, and COVID-19, DISTL offers gradually improved performance as the amount of unlabeled data increases, even exceeding the fully supervised model trained with the same amount of labeled data. We additionally show that the model obtained with DISTL is robust to various real-world nuisances, offering better applicability in clinical settings.
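The knowledge-distillation ingredient of DISTL can be illustrated with a toy PyTorch loss. This is a generic temperature-scaled distillation sketch under assumed shapes and temperature, not the published DISTL objective, which additionally combines self-supervision and self-training.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """KL divergence between temperature-softened teacher and student outputs."""
    log_p_student = F.log_softmax(student_logits / T, dim=-1)
    p_teacher = F.softmax(teacher_logits / T, dim=-1)
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * T * T

student_logits = torch.randn(8, 3, requires_grad=True)  # e.g. TB / pneumothorax / COVID-19
teacher_logits = torch.randn(8, 3)                       # produced by the teacher on unlabeled CXRs
loss = distillation_loss(student_logits, teacher_logits)
loss.backward()
print("distillation loss:", float(loss))
```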
Article
Full-text available
Pneumonia is one of the diseases that cause the most fatalities worldwide, especially in children. Recently, pneumonia-caused deaths have increased dramatically due to the novel coronavirus global pandemic. Chest X-ray (CXR) images are one of the most readily available and common imaging modalities for the detection and identification of pneumonia. However, the detection of pneumonia from chest radiography is a difficult task even for experienced radiologists. Artificial Intelligence (AI) based systems have great potential in assisting with quick and accurate diagnosis of pneumonia from chest X-rays. The aim of this study is to develop a Neural Architecture Search (NAS) method to find the best convolutional architecture capable of detecting pneumonia from chest X-rays. We propose a Learning by Teaching framework inspired by the teaching-driven learning methodology of humans, and conduct experiments on a pneumonia chest X-ray dataset with over 5000 images. Our proposed method yields an area under the ROC curve (AUC) of 97.6% for pneumonia detection, which improves upon previous NAS methods by 5.1% (absolute).
Article
Full-text available
This paper proposes a multichannel deep learning approach for lung disease detection using chest X-rays. The multichannel models used in this work are the EfficientNetB0, EfficientNetB1, and EfficientNetB2 pretrained models. The features from the EfficientNet models are fused together and passed through more than one non-linear fully connected layer. Finally, the features are passed into a stacked ensemble learning classifier for lung disease detection, which contains random forest and SVM in the first stage and logistic regression in the second stage. The performance of the proposed method is studied in detail for more than one lung disease, namely pneumonia, tuberculosis (TB), and COVID-19, and is compared with similar methods to show that it is robust and capable of achieving better performance. In all the experiments, the proposed method outperformed similar existing lung disease detection methods, indicating that it is robust and generalizes to unseen chest X-ray data samples. To ensure that the features learnt by the proposed method are optimal, t-SNE feature visualization was shown for all three lung disease models. Overall, the proposed method achieved detection accuracies of 98% for pediatric pneumonia, 99% for TB, and 98% for COVID-19. The proposed method can be used as a tool for point-of-care diagnosis by healthcare radiologists.
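The two-stage stacked ensemble described here (random forest and SVM as base learners, logistic regression as the meta-learner) can be sketched with scikit-learn on pre-extracted features. The fused EfficientNet feature matrix below is random placeholder data, so the snippet shows the ensemble wiring rather than the authors' pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3 * 1280))   # placeholder for fused EfficientNet embeddings
y = rng.integers(0, 2, size=500)       # placeholder labels (disease vs. normal)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)

stack = StackingClassifier(
    estimators=[
        ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
        ("svm", make_pipeline(StandardScaler(), SVC(probability=True))),
    ],
    final_estimator=LogisticRegression(max_iter=1000),
    cv=5,
)
stack.fit(X_train, y_train)
print("held-out accuracy:", stack.score(X_test, y_test))
```

With cv=5, the meta-learner is fit on out-of-fold predictions of the base models, which prevents it from simply memorising their training-set outputs.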
Article
Full-text available
Respiratory diseases are known to be a main cause of death worldwide, with pneumonia and COVID-19 being two of the dominant diseases. Several deep learning based studies in the literature classify infection conditions in chest X-ray images, and image segmentation has also been applied to obtain promising results in deep learning approaches. This paper focuses on using a modified version of the U-Net architecture to conduct segmentation on chest X-rays and then use the segmented images for classification to assess the impact on performance. We achieved an Intersection over Union of 93.53% with the proposed modified U-Net architecture and 99.36% accuracy with segmentation-aided ensemble classification.
Article
Full-text available
The rapid spread of COVID-19 across the globe since its emergence has pushed many countries' healthcare systems to the verge of collapse. To restrict the spread of the disease and lessen the ongoing cost to the healthcare system, it is critical to appropriately identify COVID-19-positive individuals and isolate them as soon as possible. The primary COVID-19 screening test, RT-PCR, although accurate and reliable, has a long turn-around time. More recently, various researchers have demonstrated the use of deep learning approaches on chest X-rays (CXRs) for COVID-19 detection. However, existing deep convolutional neural network (CNN) methods fail to capture the global context due to their inherent image-specific inductive bias. In this article, we investigated the use of vision transformers (ViT) for detecting COVID-19 in chest X-ray images. Several ViT models were fine-tuned for the multiclass classification problem (COVID-19, pneumonia, and normal cases). A dataset consisting of 7598 COVID-19 CXR images, 8552 CXRs of healthy patients, and 5674 pneumonia CXRs was used. The obtained results achieved high performance, with an Area Under the Curve (AUC) of 0.99 for multi-class classification (COVID-19 vs. other pneumonia vs. normal), and the sensitivity for the COVID-19 class reached 0.99. We demonstrated that the obtained results outperformed comparable state-of-the-art CNN-based models for detecting COVID-19 in CXR images. The attention map for the proposed model showed that it is able to efficiently identify the signs of COVID-19.
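Fine-tuning a vision transformer for the three-class task described above might look like the following PyTorch sketch. The torchvision ViT-B/16 backbone, optimizer settings, and random batch are assumptions standing in for the authors' fine-tuned ViT variants and their preprocessed CXR data.

```python
import torch
import torch.nn as nn
from torchvision.models import vit_b_16, ViT_B_16_Weights

# Pre-trained ViT-B/16 with its classification head replaced by a 3-way layer.
model = vit_b_16(weights=ViT_B_16_Weights.IMAGENET1K_V1)
model.heads.head = nn.Linear(model.heads.head.in_features, 3)  # COVID-19 / pneumonia / normal

optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
criterion = nn.CrossEntropyLoss()

# One illustrative fine-tuning step on random placeholder data; in practice the
# batch would come from a DataLoader of preprocessed 224x224 CXR images.
images = torch.randn(4, 3, 224, 224)
labels = torch.randint(0, 3, (4,))

optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
print("training loss:", float(loss))
```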
Article
Full-text available
Deep learning models with pre-trained CNN architectures have made a major impact on the chest radiology field, particularly chest X-ray radiography. Many deep models use the publicly available ChestX-ray14 dataset for training and testing. This dataset covers 14 different thoracic pathologies and is split image-wise and patient-wise. Existing researchers used resized ChestX-ray14 images to avoid computational complexity, which resulted in a loss of image resolution, and existing models are trained to attend to the whole image rather than to the pathologically significant regions of the X-ray. In addition, the ChestX-ray14 dataset has an imbalanced distribution across the 14 pathologies, which affects model performance, especially on minority pathology classes. Our proposed deep model is designed to address all these issues. A DenseNet-121 is used as the core block, supported by a Grad-CAM-based attention mechanism as a supporting block. This attention mechanism guides the backbone network to concentrate on abnormal regions of the image instead of the entire resized image. The proposed model is trained with adaptive augmentation training, which aims to address the imbalance in the dataset. The performance of our deep model is compared with other existing models under both splits of the dataset, achieving high average AUCs of 84.55% and 90.2%, respectively.
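The Grad-CAM mechanism used here to focus the network on abnormal regions can be reproduced generically on a stock Keras DenseNet121. The snippet below follows the standard Grad-CAM recipe on a random placeholder image; it is not the paper's adaptive-augmentation training code.

```python
import numpy as np
import tensorflow as tf

model = tf.keras.applications.DenseNet121(weights="imagenet")
last_conv = model.get_layer("relu")   # final convolutional feature map in Keras DenseNet121
grad_model = tf.keras.models.Model(model.inputs, [last_conv.output, model.output])

image = np.random.rand(1, 224, 224, 3).astype("float32")   # placeholder chest X-ray

with tf.GradientTape() as tape:
    conv_out, preds = grad_model(image)
    top_class = int(tf.argmax(preds[0]))   # class whose evidence we want to localise
    score = preds[:, top_class]

grads = tape.gradient(score, conv_out)                  # d(score) / d(feature map)
weights = tf.reduce_mean(grads, axis=(1, 2))            # global-average-pooled gradients
cam = tf.reduce_sum(conv_out[0] * weights[0], axis=-1)  # weighted sum over channels
cam = tf.nn.relu(cam) / (tf.reduce_max(cam) + 1e-8)     # normalise heatmap to [0, 1]
print("Grad-CAM heatmap shape:", cam.shape)             # (7, 7); upsample to overlay on the CXR
```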
Article
Full-text available
The COVID-19 pandemic has caused more than 200 million infections and 4 million deaths across the world, with a significant effect on the health and lives of many people worldwide. Early detection of this disease is very important for maintaining social well-being. The RT-PCR test is the usual method for detecting COVID-19, yet it is not the only reliable diagnostic tool. In this study, we discuss image-based modalities for the detection of coronavirus utilizing deep learning, one of the most innovative technologies today, which has proven to be an efficient solution for a number of medical conditions. Coronavirus affects the respiratory tract, and one of the best ways to identify the disease is from chest radiography images; early research demonstrated unique anomalies in the chest radiographs of COVID-positive patients. Using deep multi-layered networks, we classified chest images as COVID-positive or negative. The proposed model uses a dataset of patients infected with coronavirus in which the radiologist indicated multilobar involvement in the chest X-rays. A total of 6500 images were considered for the study. The convolutional neural network (CNN) model was trained and a validation accuracy of 94% was obtained.
Article
Full-text available
The COVID-19 pandemic continues to wreak havoc on the health and well-being of the world's population. Successful screening of infected patients is a critical step in the fight against it, with radiology examination using chest radiography being one of the most important screening methods. For the definitive diagnosis of COVID-19, reverse-transcriptase polymerase chain reaction remains the gold standard, but currently available lab tests may not be able to detect all infected individuals, so new screening methods are required. Motivated by this and by the open-source efforts in this research area, we propose a Multi-Input Transfer Learning COVID-Net fuzzy convolutional neural network to detect COVID-19 cases from chest X-rays. Furthermore, we use an explainability method to investigate several COVID-Net forecasts in an effort not only to gain deeper insights into critical factors associated with COVID-19 cases, but also to aid clinicians in improving screening. We show that, using transfer learning and pre-trained models, we can detect COVID-19 with a high degree of accuracy. Using X-ray images, we chose four neural networks to predict its probability, and we considered various methods to verify the techniques proposed here in order to achieve better results. As a result, we were able to create a model with an AUC of 1.0 and accuracy, precision, and recall of 0.97. The model was quantized for use in Internet of Things devices and maintained an accuracy of 0.95.
Article
Full-text available
COVID-19 is a major health calamity of the twenty-first century, with infection spreading at a global level. Developing countries such as Bangladesh and India still face delays in recognizing COVID-19 cases, so there is a need for immediate recognition with accurate identification of infection; clear visualization helps to save the lives of suspected COVID-19 patients. Alongside traditional RT-PCR testing, the combination of medical images and deep learning classifiers delivers more promising results with high accuracy in the prediction and recognition of COVID-19 cases. COVID-19 has recently been researched through sample chest X-ray images, which have already proven their efficiency for lung diseases. To emphasize coronavirus testing methods and to control community spread, the automatic detection of COVID-19 is processed through detailed medication reports from medical images. Although there are numerous challenges in manually reading traces of COVID-19 infection from X-rays, the subtle differences between normal and infected X-rays can be traced by the data patterns of a Convolutional Neural Network (CNN). To improve the detection performance of the CNN, this paper develops an Ensemble Learning with CNN-based Deep Features (EL-CNN-DF) approach. In the initial phase, image scaling and median filtering perform the pre-processing of the chest X-ray images gathered from the benchmark source. The second phase is lung segmentation, the most significant step for COVID detection, accomplished by an Adaptive Activation Function-based U-Net (AAF-U-Net). Once the lungs are segmented, they are subjected to the novel EL-CNN-DF, in which deep features are extracted from the pooling layer of the CNN, and the fully connected layer of the CNN is replaced with three classifiers, namely a Support Vector Machine (SVM), an autoencoder, and Naive Bayes (NB). The final detection of COVID-19 is performed by these classifiers using a high-ranking strategy. As a modification, a Self Adaptive-Tunicate Swarm Algorithm (SA-TSA) is adopted as a boosting algorithm to enhance the performance of segmentation and detection. The overall analysis has shown that the precision of the enhanced CNN using SA-TSA was 1.02%, 4.63%, 3.38%, 1.62%, 1.51%, and 1.04% better than SVM, autoencoder, NB, Ensemble, RNN, and LSTM, respectively. The comparative performance analysis against existing models shows that the proposed algorithm outperforms other algorithms in the segmentation and classification of COVID-19 detection.
Article
Full-text available
In most regions of the world, tuberculosis (TB) is classified as a malignant infectious disease that can be fatal. Using advanced tools and technology, automatic analysis and classification of chest X-rays (CXRs) into TB and non-TB can be a reliable alternative to the subjective assessment performed by healthcare professionals. Thus, in this study, we propose an automatic TB detection system using advanced deep learning (DL) models. A significant portion of a CXR image is dark, providing no information for diagnosis and potentially confusing DL models; therefore, in the proposed system, we use sophisticated segmentation networks to extract the region of interest from CXRs, and the segmented images are then fed into the DL models. For the subjective assessment, we use explainable artificial intelligence to visualize TB-infected parts of the lung. We use different convolutional neural network (CNN) models in our experiments and compare their classification performance using three publicly available CXR datasets. EfficientNetB3, one of the CNN models, achieves the highest accuracy of 99.1%, with a receiver operating characteristic of 99.9% and an average accuracy of 98.7%. Experimental results confirm that using segmented lung CXR images produces better performance than using raw lung CXR images.
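The "segment first, then classify" idea in this abstract can be sketched as follows. A simple intensity threshold stands in for the segmentation network, and the untrained EfficientNetB3 head is only there to show where the masked image enters the classifier; none of this is the authors' implementation.

```python
import numpy as np
import tensorflow as tf

cxr = np.random.rand(512, 512).astype("float32")   # placeholder chest X-ray
lung_mask = (cxr > 0.3).astype("float32")          # stand-in for a real segmentation network
masked = cxr * lung_mask                           # keep only the (supposed) lung region

x = tf.image.resize(masked[..., None], (300, 300))      # EfficientNetB3 input size
x = tf.repeat(x, 3, axis=-1)[None, ...] * 255.0          # grayscale -> 3-channel, 0-255 range

backbone = tf.keras.applications.EfficientNetB3(include_top=False, pooling="avg")
head = tf.keras.layers.Dense(2, activation="softmax")     # untrained TB vs. non-TB head
probs = head(backbone(x))
print("class probabilities shape:", probs.shape)
```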
Article
Full-text available
The use of chest X-ray images (CXI) to detect Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2), the cause of Coronavirus Disease 2019 (COVID-19), is of life-saving importance for both patients and doctors. This research proposes a multi-channel feature deep neural network (MFDNN) algorithm to screen people infected with COVID-19. The algorithm integrates data over-sampling technology and the MFDNN model to carry out the training. The oversampling technique reduces the deviation of the prior probability of the MFDNN algorithm on unbalanced data, while multi-channel feature fusion improves the efficiency of feature extraction and the accuracy of model diagnosis. In the experiments, compared with traditional deep learning models (VGG19, GoogLeNet, ResNet50, DenseNet201), the MFDNN model obtains an average test accuracy of 93.19% on all data. Furthermore, in each type of screening, the precision, recall, and F1 score of the MFDNN model are also better than those of traditional deep learning networks. Through ablation experiments, we show that a multi-channel convolutional neural network (CNN) is superior to a single-channel CNN, an additional layer, and the PSN module, indirectly demonstrating the sufficiency and necessity of each step of the MFDNN classification method. Finally, our experimental code is available at https://github.com/panliangrui/covid19.
Article
Full-text available
Chest X-ray (CXR) imaging is one of the most widely used and economical tests to diagnose a wide range of diseases. However, even for expert radiologists, it is a challenge to accurately diagnose diseases from CXR samples. Furthermore, there remains an acute shortage of trained radiologists worldwide. In the present study, a range of machine learning (ML), deep learning (DL), and transfer learning (TL) approaches have been evaluated to classify diseases in an openly available CXR image dataset. A combination of the synthetic minority over-sampling technique (SMOTE) and weighted class balancing is used to alleviate the effects of class imbalance. A hybrid Inception-ResNet-v2 transfer learning model coupled with data augmentation and image enhancement gives the best accuracy. The model is deployed in an edge environment using Amazon IoT Core to automate the task of disease detection in CXR images with three categories, namely pneumonia, COVID-19, and normal. Comparative analysis has been given in various metrics such as precision, recall, accuracy, AUC-ROC score, etc. The proposed technique gives an average accuracy of 98.66%. The accuracies of other TL models, namely SqueezeNet, VGG19, ResNet50, and MobileNetV2 are 97.33%, 91.66%, 90.33%, and 76.00%, respectively. Further, a DL model, trained from scratch, gives an accuracy of 92.43%. Two feature-based ML classification techniques, namely support vector machine with local binary pattern (SVM + LBP) and decision tree with histogram of oriented gradients (DT + HOG) yield an accuracy of 87.98% and 86.87%, respectively.
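The class-imbalance handling mentioned here (SMOTE combined with weighted class balancing) can be illustrated with imbalanced-learn and scikit-learn on synthetic features. The feature matrix and the downstream SVM are placeholders for the transfer-learning features and classifiers evaluated in the paper.

```python
import numpy as np
from imblearn.over_sampling import SMOTE
from sklearn.svm import SVC
from sklearn.utils.class_weight import compute_class_weight

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 64))         # placeholder image features
y = np.array([0] * 270 + [1] * 30)     # heavily imbalanced labels (e.g. normal vs. COVID-19)

# SMOTE synthesises minority-class samples until the classes are balanced.
X_res, y_res = SMOTE(random_state=0).fit_resample(X, y)
print("class counts after SMOTE:", np.bincount(y_res))

# Weighted class balancing can still be applied to the downstream classifier;
# after SMOTE the weights are close to 1.0, but the mechanism is the same.
weights = compute_class_weight("balanced", classes=np.unique(y_res), y=y_res)
clf = SVC(class_weight={0: weights[0], 1: weights[1]})
clf.fit(X_res, y_res)
```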
Article
Full-text available
Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2) has caused outbreaks of new coronavirus disease (COVID-19) around the world. Rapid and accurate detection of COVID-19 coronavirus is an important step in limiting the spread of the COVID-19 epidemic. To solve this problem, radiography techniques (such as chest X-rays and computed tomography (CT)) can play an important role in the early prediction of COVID-19 patients, which will help to treat patients in a timely manner. We aimed to quickly develop a highly efficient lightweight CNN architecture for detecting COVID-19-infected patients. The purpose of this paper is to propose a robust deep learning-based system for reliably detecting COVID-19 from chest X-ray images. First, we evaluate the performance of various pre-trained deep learning models (InceptionV3, Xception, MobileNetV2, NasNet and DenseNet201) recently proposed for medical image classification. Second, a lightweight shallow convolutional neural network (CNN) architecture is proposed for classifying X-ray images of a patient with a low false-negative rate. The data set used in this work contains 2,541 chest X-rays from two different public databases, which have confirmed COVID-19 positive and healthy cases. The performance of the proposed model is compared with the performance of pre-trained deep learning models. The results show that the proposed shallow CNN provides a maximum accuracy of 99.68% and more importantly sensitivity, specificity and AUC of 99.66%, 99.70% and 99.98%. The proposed model has fewer parameters and low complexity compared to other deep learning models. The experimental results of our proposed method show that it is superior to the existing state-of-the-art methods. We believe that this model can help healthcare professionals to treat COVID-19 patients through improved and faster patient screening.
Article
Full-text available
Using chest X-ray images is one of the least expensive and easiest ways to diagnose patients who suffer from lung diseases such as pneumonia and bronchitis. Inspired by existing work, a deep learning model is proposed to classify chest X-ray images into 14 lung-related pathological conditions. However, small datasets are not sufficient to train the deep learning model. Two methods were used to tackle this: (1) transfer learning based on two pretrained neural networks, DenseNet and ResNet, was employed; (2) data were preprocessed, including checking data leakage, handling class imbalance, and performing data augmentation, before feeding the neural network. The proposed model was evaluated according to the classification accuracy and receiver operating characteristic (ROC) curves, as well as visualized by class activation maps. DenseNet121 and ResNet50 were used in the simulations, and the results showed that the model trained by DenseNet121 had better accuracy than that trained by ResNet50.
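A typical setup for the 14-condition, multi-label DenseNet121 classifier described above is a sigmoid output head trained with a class-weighted binary cross-entropy, which is one common way to handle the imbalance across pathologies. The sketch below is one such configuration with placeholder positive-class weights, not the authors' exact training recipe.

```python
import tensorflow as tf

backbone = tf.keras.applications.DenseNet121(
    include_top=False, weights="imagenet", pooling="avg",
    input_shape=(224, 224, 3))
outputs = tf.keras.layers.Dense(14, activation="sigmoid")(backbone.output)  # one score per pathology
model = tf.keras.Model(backbone.input, outputs)

pos_weight = tf.constant([10.0] * 14)   # placeholder weights; derive from label frequencies in practice

def weighted_bce(y_true, y_pred):
    # Binary cross-entropy with up-weighted positives for each of the 14 labels.
    eps = tf.keras.backend.epsilon()
    y_pred = tf.clip_by_value(y_pred, eps, 1.0 - eps)
    loss = -(pos_weight * y_true * tf.math.log(y_pred)
             + (1.0 - y_true) * tf.math.log(1.0 - y_pred))
    return tf.reduce_mean(loss)

model.compile(optimizer="adam", loss=weighted_bce,
              metrics=[tf.keras.metrics.AUC(multi_label=True, num_labels=14)])
```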
Article
Full-text available
Accurate detection of COVID-19 is of immense importance to help physicians intervene with appropriate treatments. Although RT-PCR is routinely used for COVID-19 detection, it is expensive, takes a long time, and is prone to inaccurate results. Currently, medical imaging-based detection systems have been explored as an alternative for more accurate diagnosis. In this work, we propose a multi-level diagnostic framework for the accurate detection of COVID-19 using X-ray scans based on transfer learning. The developed framework consists of three stages, beginning with a pre-processing step to remove noise effects and resize images, followed by a deep learning architecture utilizing an Xception pre-trained model for feature extraction from the pre-processed image. Our design utilizes a global average pooling (GAP) layer to avoid over-fitting, and an activation layer is added in order to reduce the losses. Final classification is achieved using a softmax layer. The system is evaluated using different activation functions and thresholds with different optimizers. We used a benchmark dataset from the Kaggle website, and the proposed model has been evaluated on 7395 images across three classes (COVID-19, normal, and pneumonia). Additionally, we compared our framework with traditional pre-trained deep learning models and with other literature studies. Our evaluation using various metrics showed that the framework achieved a high test accuracy of 99.3% with a minimum loss of 0.02 using the LeakyReLU activation function at a threshold of 0.1 with the RMSprop optimizer. Additionally, we achieved a sensitivity and specificity of 99% and an F1-score of 99.3% with only 10 epochs and a learning rate of 10⁻⁴.
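The Xception-plus-GAP design summarised here can be assembled in a few lines of Keras. The dense-layer width is an assumption, while the LeakyReLU slope of 0.1, the RMSprop optimizer, and the 10⁻⁴ learning rate follow the values quoted in the abstract; this is a sketch of the described architecture, not the authors' code.

```python
import tensorflow as tf
from tensorflow.keras import layers

base = tf.keras.applications.Xception(
    include_top=False, weights="imagenet", input_shape=(299, 299, 3))
base.trainable = False

x = layers.GlobalAveragePooling2D()(base.output)     # GAP instead of Flatten to limit over-fitting
x = layers.Dense(256)(x)                             # dense width is an assumption
x = layers.LeakyReLU(0.1)(x)                         # slope taken from the abstract
outputs = layers.Dense(3, activation="softmax")(x)   # COVID-19 / normal / pneumonia

model = tf.keras.Model(base.input, outputs)
model.compile(optimizer=tf.keras.optimizers.RMSprop(learning_rate=1e-4),
              loss="categorical_crossentropy", metrics=["accuracy"])
model.summary()
```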
Article
Full-text available
The novel coronavirus (SARS-CoV-2) is spreading rapidly worldwide and has become a great risk to human beings. To curb community transmission of this virus, rapid detection and identification of affected people via a quick diagnostic process are necessary. Studies have shown that most COVID-19 victims endure lung disease. For rapid identification of affected patients, chest CT scans and X-ray images have been reported to be suitable techniques. However, chest X-ray (CXR) is more convenient than CT imaging because it has faster imaging times and is also simple and cost-effective. The literature shows that transfer learning is one of the most successful techniques for analysing chest X-ray images and correctly identifying various types of pneumonia. Since SVM has the remarkable property of providing good results with small datasets, in this study we used the SVM machine learning algorithm to diagnose COVID-19 from chest X-ray images. RGB image processing and SqueezeNet models were used to obtain more images for diagnosis from the available dataset. Our adopted model shows an accuracy of 98.8% in detecting COVID-19-affected patients from CXR images. It is expected that our proposed computer-aided detection tool (CAT) will play a key role in reducing the spread of infectious diseases in society through a faster patient screening process.
Chapter
Lung disease, which affects both adults and children, is on the rise worldwide. This chapter provides a brief overview of some of the respiratory disorders currently affecting millions of people worldwide. The exact burden of respiratory disease mortality and morbidity is uncertain; according to recent studies by the World Health Organization (WHO) and other agencies, nearly 400 million people worldwide suffer from mild to severe asthma and chronic obstructive pulmonary disease (COPD) alone. The pathophysiology of major lung diseases such as COPD, asthma, pneumonia, lung cancer, tuberculosis, and cystic fibrosis is discussed in this chapter, along with the limitations of existing treatment approaches.
Article
The COVID-19 pandemic is a major global outbreak that has had a severe impact on people's lives in more than 150 countries. The major steps in fighting COVID-19 are identifying affected patients as early as possible and placing them under special care. Images from radiology and radiography are among the most effective tools for determining a patient's ailment, and recent studies have shown detailed abnormalities in the chest radiographs of patients affected by COVID-19. The purpose of this work is to present a COVID-19 detection system with three key steps: (i) preprocessing, (ii) feature extraction, and (iii) classification. The input image is first given to the preprocessing step, after which deep features and texture features are extracted from the preprocessed image. In particular, deep features are extracted by InceptionV3, while features such as the proposed Local Vector Patterns (LVP) and Local Binary Patterns (LBP) are extracted from the preprocessed image. The extracted features are then subjected to the proposed ensemble-model-based classification phase, which includes a Support Vector Machine (SVM), a Convolutional Neural Network (CNN), an Optimized Neural Network (NN), and a Random Forest (RF). A novel Self Adaptive Krill Herd Optimization (SAKHO) approach is used to properly tune the weights of the NN to improve classification accuracy and precision. The performance of the proposed method is then compared with conventional approaches using a variety of metrics, including recall, FNR, MCC, FDR, threat score, FPR, precision, FOR, accuracy, specificity, NPV, FMS, and sensitivity.
Article
In 2019, the world experienced the rapid outbreak of the COVID-19 pandemic, creating an alarming situation worldwide. The virus targets the respiratory system, causing pneumonia along with other symptoms such as fatigue, dry cough, and fever, which can be mistakenly diagnosed as pneumonia, lung cancer, or TB. Early diagnosis of COVID-19 is therefore critical, since the disease can lead to patient mortality. Chest X-ray (CXR) is commonly employed in the healthcare sector, where it can supply both quick and precise diagnosis. Deep learning algorithms have demonstrated extraordinary capabilities for lung disease detection and classification; they facilitate and expedite the diagnosis process and save time for medical practitioners. In this paper, a deep learning (DL) architecture for multi-class classification of pneumonia, lung cancer, tuberculosis (TB), lung opacity, and, most recently, COVID-19 is proposed. CXR images comprising 3615 COVID-19, 6012 lung opacity, 5870 pneumonia, 20,000 lung cancer, 1400 tuberculosis, and 10,192 normal images were resized, normalized, and randomly split to fit the DL requirements. For classification, we utilized a pre-trained VGG19 model followed by three blocks of convolutional neural network (CNN) layers for feature extraction and a fully connected network at the classification stage. The experimental results revealed that the proposed VGG19 + CNN outperformed other existing work with 96.48% accuracy, 93.75% recall, 97.56% precision, a 95.62% F1 score, and a 99.82% area under the curve (AUC). The proposed model delivers superior performance, allowing healthcare practitioners to diagnose and treat patients more quickly and efficiently.
Conference Paper
Lung disease is one of the diseases that can be cured when it is spotted in its early stages, before it becomes chronic, so early diagnosis offers the best prospect of a cure. However, most people fail to detect their disease before it becomes chronic, which contributes to an increasing death toll around the world. Image classification, the process of extracting information from an image, plays a major role in medical image analysis. A Convolutional Neural Network (CNN) model, a type of deep learning architecture, is introduced to achieve the correct classification of lung disease. The proposed technique was evaluated on the COVID-19 CXR dataset from a GitHub repository, which comprises normal CXRs and COVID-19 CXRs. The method employs ResNet50 and DenseNet as pre-trained classifier models with softmax and sigmoid activation functions, respectively. The ResNet50 model provides promising results with a validation accuracy of 86.67%, and DenseNet provides a validation accuracy of 98.33%. The performance of each model is compared based on validation loss and validation accuracy, and the classification concludes that the DenseNet model is more efficient than the ResNet50 model.
Article
Chest radiographs are widely used in the medical domain, and at present chest radiography plays a particularly important role in the diagnosis of medical conditions such as pneumonia and COVID-19. Recent developments in deep learning techniques have led to promising performance in medical image classification and prediction tasks. With the availability of chest X-ray datasets and emerging trends in data engineering techniques, there has been a growth in recent related publications. Only a few survey papers have addressed chest X-ray classification using deep learning techniques, and they lack an analysis of the trends in recent studies. This systematic review paper explores and provides a comprehensive analysis of related studies that have used deep learning techniques to analyse chest X-ray images. We present state-of-the-art deep learning based pneumonia and COVID-19 detection solutions, trends in recent studies, publicly available datasets, guidance for following a deep learning process, challenges, and potential future research directions in this domain. The discoveries and conclusions of the reviewed work have been organized in a way that researchers and developers working in the same domain can use to support their research decisions.
Article
In the context of the global Coronavirus disease 2019 (COVID-19) pandemic, which threatens the lives of all human beings, it is of vital importance to achieve early detection of COVID-19 among symptomatic patients. In this paper, a computer aided diagnosis (CAD) model, Cov-Net, is proposed for accurate recognition of COVID-19 from chest X-ray images via machine vision techniques, concentrating on powerful and robust feature learning. In particular, a modified residual network with asymmetric convolution and an embedded attention mechanism is selected as the backbone feature extractor, after which skip-connected dilated convolutions with varying dilation rates are applied to achieve sufficient feature fusion between high-level semantic and low-level detailed information. Experimental results on two public COVID-19 radiography databases demonstrate the practicality of the proposed Cov-Net for accurate COVID-19 recognition, with accuracies of 0.9966 and 0.9901, respectively. Furthermore, under the same experimental conditions, the proposed Cov-Net outperforms six other state-of-the-art computer vision algorithms, which validates its superiority and competitiveness in building highly discriminative features from a methodological perspective. Hence, the proposed Cov-Net is deemed to have good generalization ability, so it can be applied to other CAD scenarios. Consequently, this work has both practical value, in providing a reliable reference for radiologists, and theoretical significance, in developing methods to build robust features with strong representation ability.
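The skip-connected dilated convolutions with varying dilation rates mentioned here can be sketched as a small PyTorch module. The channel count, dilation rates, and fusion by summation are assumptions used to illustrate the multi-scale idea, not the published Cov-Net architecture.

```python
import torch
import torch.nn as nn

class DilatedFusion(nn.Module):
    """Parallel 3x3 convolutions with different dilation rates, summed and
    combined with a skip connection so fine detail and wider context are fused."""
    def __init__(self, channels, rates=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(channels, channels, kernel_size=3,
                      padding=r, dilation=r, bias=False)
            for r in rates)
        self.bn = nn.BatchNorm2d(channels)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        out = sum(branch(x) for branch in self.branches)  # multi-scale context
        return self.act(self.bn(out) + x)                 # skip connection

feats = torch.randn(1, 256, 14, 14)        # placeholder backbone feature map
print(DilatedFusion(256)(feats).shape)     # torch.Size([1, 256, 14, 14])
```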
Article
The coronavirus disease 2019 (COVID-19) epidemic has had a significant impact on daily life in many nations and on global public health, and its quick spread has become one of the world's most disruptive calamities. In the fight against COVID-19, it is critical to keep a close eye on the initial stage of infection in patients. Furthermore, early COVID-19 discovery through precise diagnosis, especially in patients with no evident symptoms, may reduce the patient death rate and stop the spread of COVID-19. Compared to CT images, chest X-ray (CXR) images are now widely employed for COVID-19 diagnosis, since CXR images contain more robust features of the lung; furthermore, radiologists can easily diagnose CXR images because of their speed and low cost, which is promising for emergency situations and therapy. This work proposes a tri-stage CXR-image-based COVID-19 classification model using deep learning convolutional neural networks (DLCNN) with an optimal feature selection technique, an enhanced grey-wolf optimizer with genetic algorithm (EGWO-GA), denoted as CXGNet. The proposed CXGNet is implemented as 4-class, 3-class, and 2-class models depending on the diseases. Extensive simulation results disclose the superiority of the proposed CXGNet model, with enhanced classification accuracies of 94.00% for the 4-class model, 97.05% for the 3-class model, and 100% for the 2-class model compared to conventional methods.
Conference Paper
A novel coronavirus (SARS-CoV-2) struck the world in December 2019. First detected in Wuhan, China, this acute respiratory syndrome has since spread all over the world and has been officially declared a global pandemic, with a massive detrimental effect on global health and the economy. While researchers continue the search for vaccines, detection and proper diagnosis of the virus are equally important to limit its spread. Chest X-rays (CXRs) are one of the most common types of radiology examination, and the CXRs of infected patients can serve as a crucial step in detection of the virus. Computer-aided automatic diagnosis can minimize human interaction, errors, and workload while maximizing efficiency. Various studies have shown that the use of artificial intelligence to detect COVID-19 patients through their CXRs is strongly promising. In this paper, a robust and efficient computer-aided detection system is proposed for multiclass image classification of diseases such as COVID-19 and pneumonia using patients' CXRs. The algorithms have achieved the desired results, which can be further improved when more CXR images become available. The proposed method has outperformed current state-of-the-art algorithms, achieving 98.3% accuracy with a precision of 0.94, and can be used as a fast and reliable preliminary test for detection of the virus.