Approximately two million pediatric deaths occur every year due to pneumonia. Detection and diagnosis of pneumonia therefore play an important role in reducing these deaths. Chest radiography is one of the most commonly used modalities to detect pneumonia. In this paper, we propose a novel two-stage deep learning architecture to detect pneumonia and classify its type in chest radiographs. This architecture contains one network to classify images as either normal or pneumonic, and a second network to classify the pneumonia type as either bacterial or viral. We study and compare the performance of various stage-one networks, such as AlexNet, ResNet, VGG16, and Inception-v3, for detection of pneumonia. For these networks, we employ transfer learning to exploit the wealth of information available from prior training. For the second stage, we find that transfer learning with these same networks tends to overfit the data. For this reason, we propose a simpler CNN architecture for classification of pneumonic chest radiographs and show that it overcomes the overfitting problem. We further enhance the performance of our system in a novel way by incorporating lung segmentation using a U-Net architecture. We make use of a publicly available dataset comprising 5856 images (1583 normal, 4273 pneumonic). Among the pneumonic cases, 2780 are identified as bacterial and the rest as viral. We test our proposed algorithms on a set of 624 images and achieve an area under the receiver operating characteristic curve of 0.996 for pneumonia detection. We also achieve an accuracy of 97.8% for classification of pneumonic chest radiographs, thereby setting a new benchmark for both detection and diagnosis. We believe the proposed two-stage classification of chest radiographs for pneumonia detection and diagnosis would enhance the workflow of radiologists.
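The cascaded decision described in the abstract — one network screens for pneumonia, and a second network subtypes only the flagged cases — can be sketched as plain control flow. This is an illustrative skeleton, not the authors' code; the `predict_pneumonia` and `predict_subtype` callables and the 0.5 thresholds are assumptions standing in for the trained stage-one and stage-two networks.

```python
def two_stage_classify(image, predict_pneumonia, predict_subtype):
    """Return 'normal', 'bacterial', or 'viral' for a chest radiograph.

    predict_pneumonia / predict_subtype stand in for trained networks
    that return a probability in [0, 1]; the thresholds are illustrative.
    """
    # Stage one: binary normal-vs-pneumonic screening decision.
    if predict_pneumonia(image) < 0.5:
        return "normal"
    # Stage two runs only on images flagged as pneumonic.
    return "bacterial" if predict_subtype(image) >= 0.5 else "viral"
```

With dummy scorers, `two_stage_classify(None, lambda img: 0.2, lambda img: 0.9)` returns `"normal"` because stage two is never reached — the cascade is what lets each stage specialize.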
Despite advances in medical image acquisition and computer-aided support techniques, X-rays remain an important diagnostic procedure due to their low cost, high availability, and low radiation levels, constituting the most frequently performed radiographic examination in pediatric patients for disease investigation, while researchers look for increasingly efficient techniques to support decision-making. Emerging in the last decade as a viable alternative, deep learning (DL), a technique inspired by neuroscience and neural connections, has gained much attention from researchers and made significant advances in the field of medical imaging, outperforming the state of the art of many techniques, including those applied to pediatric chest radiography (PCXR). Given this scenario, and considering that, as far as we know, there is still no mapping study on the application of deep learning techniques to PCXR images, we propose in this article a "deep radiography" of the last decade of this research topic and a preliminary research agenda covering the state of the art of applying DL to PCXR, which constitutes a collaborative tool for future researchers. Our goal is to identify primary studies and support the process of choosing and developing DL techniques applied to PCXR images, in addition to pointing out gaps and trends by drawing up a preliminary research agenda. A protocol is described for each phase, detailing the criteria used from selection to extraction, and our set of selected studies is subjected to careful analysis to answer the research questions. Six basic sources were used, and the synthesis, results, limitations, and conclusions are presented.
Pneumonia affects 7% of the global population, resulting in 2 million pediatric deaths every year. Chest X-ray (CXR) analysis is routinely performed to diagnose the disease. Computer-aided diagnostic (CADx) tools aim to supplement decision-making. These tools process handcrafted and/or convolutional neural network (CNN)-extracted image features for visual recognition. However, CNNs are perceived as black boxes, since their predictions lack explanations. This is a serious bottleneck in applications involving medical screening and diagnosis, since poorly interpreted model behavior could adversely affect clinical decisions. In this study, we evaluate, visualize, and explain the performance of customized CNNs that detect pneumonia and further differentiate between bacterial and viral types in pediatric CXRs. We present a novel visualization strategy to localize the region of interest (ROI) considered relevant for model predictions across all inputs that belong to an expected class. We statistically validate the models' performance on the underlying tasks. We observe that the customized VGG16 model achieves 96.2% and 93.6% accuracy in detecting the disease and distinguishing between bacterial and viral pneumonia, respectively. The model outperforms the state of the art in all performance metrics and demonstrates reduced bias and improved generalization.
The implementation of clinical-decision support algorithms for medical imaging faces challenges with reliability and interpretability. Here, we establish a diagnostic tool based on a deep-learning framework for the screening of patients with common treatable blinding retinal diseases. Our framework utilizes transfer learning, which trains a neural network with a fraction of the data of conventional approaches. Applying this approach to a dataset of optical coherence tomography images, we demonstrate performance comparable to that of human experts in classifying age-related macular degeneration and diabetic macular edema. We also provide a more transparent and interpretable diagnosis by highlighting the regions recognized by the neural network. We further demonstrate the general applicability of our AI system for diagnosis of pediatric pneumonia using chest X-ray images. This tool may ultimately aid in expediting the diagnosis and referral of these treatable conditions, thereby facilitating earlier treatment, resulting in improved clinical outcomes.
We study the performance of a computer-aided detection (CAD) system for lung nodules in computed tomography (CT) as a function of slice thickness. In addition, we propose and compare three different training methodologies for utilizing nonhomogeneous thickness training data (i.e., composed of cases with different slice thicknesses). These methods are (1) aggregate training using the entire suite of data at their native thickness, (2) homogeneous subset training that uses only the subset of training data that matches each testing case, and (3) resampling all training and testing cases to a common thickness. We believe this study has important implications for how CT is acquired, processed, and stored. We make use of 192 CT cases acquired at a thickness of 1.25 mm and 283 cases at 2.5 mm. These data are from the publicly available Lung Nodule Analysis 2016 dataset. In our study, CAD performance at 2.5 mm is comparable with that at 1.25 mm and is much better than at higher thicknesses. Also, resampling all training and testing cases to 2.5 mm provides the best performance among the three training methods compared in terms of accuracy, memory consumption, and computational time.
This paper considers the task of thorax disease classification on chest X-ray images. Existing methods generally use the global image as input for network learning. Such a strategy is limited in two respects: 1) a thorax disease usually occurs in small, localized, disease-specific areas, so training CNNs on the global image may be affected by excessive irrelevant noisy regions; 2) due to the poor alignment of some CXR images, the existence of irregular borders hinders network performance. In this paper, we address these problems by proposing a three-branch attention-guided convolutional neural network (AG-CNN). AG-CNN 1) learns from disease-specific regions to avoid noise and improve alignment, and 2) integrates a global branch to compensate for discriminative cues lost by the local branch. Specifically, we first learn a global CNN branch using global images. Then, guided by the attention heat map generated from the global branch, we infer a mask to crop a discriminative region from the global image. This local region is used to train a local CNN branch. Lastly, we concatenate the last pooling layers of both the global and local branches to fine-tune a fusion branch. Comprehensive experiments are conducted on the ChestX-ray14 dataset. We first report a strong global baseline producing an average AUC of 0.841 with ResNet-50 as the backbone. After combining the local cues with the global information, AG-CNN improves the average AUC to 0.868. When DenseNet-121 is used, the average AUC reaches 0.871, a new state of the art in the community.
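The attention-guided cropping step — thresholding the global branch's heat map and taking the bounding box of the surviving region as the input to the local branch — can be sketched in a few lines. This is a simplified illustration, not AG-CNN's actual implementation: the relative threshold `tau` and the nested-list heat-map representation are assumptions, and the real method operates on connected regions of the mask.

```python
def attention_crop(heatmap, tau=0.7):
    """Bounding box (r0, c0, r1, c1) of heat-map cells >= tau * peak.

    heatmap is a 2-D nested list of non-negative attention values;
    the box uses half-open row/column ranges, as in array slicing.
    """
    peak = max(max(row) for row in heatmap)
    cut = tau * peak  # relative threshold, an illustrative choice
    rows = [i for i, row in enumerate(heatmap) if any(v >= cut for v in row)]
    cols = [j for j in range(len(heatmap[0]))
            if any(row[j] >= cut for row in heatmap)]
    return min(rows), min(cols), max(rows) + 1, max(cols) + 1
```

For a 3x3 map whose hot cells sit in the lower-right 2x2 block, the function returns `(1, 1, 3, 3)` — the slice one would crop from the global image before feeding the local branch.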
Early detection of pulmonary lung nodules plays a significant role in the diagnosis of lung cancer. Computed tomography (CT) and chest radiographs (CRs) are currently used by radiologists to detect such nodules. In this paper, we present a novel cluster-based classifier architecture for lung nodule computer-aided detection systems in both modalities. We propose a novel optimized method of feature selection for both the cluster and classifier components. For CRs, we make use of an independent database comprising 160 cases with a total of 173 nodules for training. Testing is implemented on a publicly available database created by the Standard Digital Image Database Project Team of the Scientific Committee of the Japanese Society of Radiological Technology (JSRT). The JSRT database comprises 154 CRs, each containing one radiologist-confirmed nodule. In this research, we exclude 14 cases from the JSRT database that contain lung nodules in the retrocardiac and subdiaphragmatic regions of the lung. For CT scans, the analysis is based on threefold cross-validation performance on 107 cases from the publicly available dataset created by the Lung Image Database Consortium, comprising 280 nodules. Overall, at an operating point of 3 false positives per case on average, we show a classifier performance boost of 7.7% for CRs and 5.0% for CT scans when compared to a single aggregate classifier architecture.
We trained a large, deep convolutional neural network to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into 1000 different classes. On the test data, we achieved top-1 and top-5 error rates of 37.5% and 17.0%, which is considerably better than the previous state of the art. The neural network, which has 60 million parameters and 650,000 neurons, consists of five convolutional layers, some of which are followed by max-pooling layers, and three fully connected layers with a final 1000-way softmax. To make training faster, we used non-saturating neurons and a very efficient GPU implementation of the convolution operation. To reduce overfitting in the fully connected layers, we employed a recently developed regularization method called dropout that proved to be very effective. We also entered a variant of this model in the ILSVRC-2012 competition and achieved a winning top-5 test error rate of 15.3%, compared to 26.2% achieved by the second-best entry.
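The dropout regularization mentioned above zeroes each hidden unit with probability p during training, which discourages co-adaptation of features. A common formulation, inverted dropout, rescales the surviving units by 1/(1-p) so that no compensation is needed at test time. A minimal sketch (illustrative, not the paper's GPU implementation):

```python
import random

def dropout(x, p, rng):
    """Inverted dropout on a list of activations x.

    Each unit is zeroed with probability p; survivors are scaled by
    1/(1-p) so the expected activation is unchanged at test time.
    """
    return [0.0 if rng.random() < p else v / (1.0 - p) for v in x]
```

With p = 0.5, every activation in the output is either 0.0 or exactly double its input value, and on average half the units are dropped on each forward pass.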
Lung cancer is the leading cause of cancer death in the United States. It usually exhibits its presence with the formation of pulmonary nodules. Nodules are round or oval-shaped growths in the lung. Computed Tomography (CT) scans are used by radiologists to detect such nodules. Computer Aided Detection (CAD) of such nodules would provide a second opinion to radiologists and be of valuable help in lung cancer screening. In this research, we study various feature selection methods for the CAD system framework proposed in FlyerScan. Algorithmic steps of FlyerScan include (i) local contrast enhancement, (ii) automated anatomical segmentation, (iii) detection of potential nodule candidates, (iv) feature computation and selection, and (v) candidate classification. In this paper, we study the performance of FlyerScan by implementing various classification methods such as linear, quadratic, and Fisher linear discriminant classifiers. The algorithm is implemented using the publicly available Lung Image Database Consortium — Image Database Resource Initiative (LIDC-IDRI) dataset. 107 cases from LIDC-IDRI are handpicked for this paper, and the performance of the CAD system is studied based on 5 example cases of the Automatic Nodule Detection (ANODE09) database. This research will aid in improving the nodule detection rate in CT scans, thereby enhancing a patient's chance of survival.
Convolutional networks are at the core of most state-of-the-art computer vision solutions for a wide variety of tasks. Since 2014, very deep convolutional networks have become mainstream, yielding substantial gains in various benchmarks. Although increased model size and computational cost tend to translate to immediate quality gains for most tasks (as long as enough labeled data is provided for training), computational efficiency and low parameter count are still enabling factors for various use cases such as mobile vision and big-data scenarios. Here we explore ways to scale up networks that aim to utilize the added computation as efficiently as possible, through suitably factorized convolutions and aggressive regularization. We benchmark our methods on the ILSVRC 2012 classification challenge validation set and demonstrate substantial gains over the state of the art: 21.2% top-1 and 5.6% top-5 error for single-frame evaluation using a network with a computational cost of 5 billion multiply-adds per inference and fewer than 25 million parameters. With an ensemble of 4 models and multi-crop evaluation, we report 3.5% top-5 error and 17.3% top-1 error.
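The saving from factorized convolutions can be checked with simple arithmetic: two stacked 3x3 convolutions cover the same 5x5 receptive field as one 5x5 convolution while using 18·c² weights instead of 25·c². The channel width below is an illustrative choice, not a figure from the paper.

```python
def conv_params(k, c_in, c_out):
    """Weight count of a k x k convolution layer (bias terms ignored)."""
    return k * k * c_in * c_out

# Replacing one 5x5 conv with two stacked 3x3 convs at the same
# (hypothetical) channel width of 64 keeps the receptive field but
# cuts the weight count by 28%.
single_5x5 = conv_params(5, 64, 64)                      # 25 * 64 * 64
stacked_3x3 = 2 * conv_params(3, 64, 64)                 # 18 * 64 * 64
```

Here `single_5x5` is 102,400 weights versus 73,728 for `stacked_3x3`, and the factorized form also interposes an extra nonlinearity between the two 3x3 layers.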
In this work we investigate the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting. Our main contribution is a thorough evaluation of networks of increasing depth, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers. These findings were the basis of our ImageNet Challenge 2014 submission, where our team secured the first and the second places in the localisation and classification tracks respectively.
The National Library of Medicine (NLM) is developing a digital chest x-ray (CXR) screening system for deployment in resource-constrained communities and developing countries worldwide with a focus on early detection of tuberculosis. A critical component in the computer-aided diagnosis of digital CXRs is the automatic detection of the lung regions. In this paper, we present a non-rigid registration-driven robust lung segmentation method using image retrieval-based patient specific adaptive lung models that detects lung boundaries, surpassing state-of-the-art performance. The method consists of three main stages: (i) a content-based image retrieval approach for identifying training images (with masks) most similar to the patient CXR using a partial Radon transform and Bhattacharyya shape similarity measure, (ii) creating the initial patient-specific anatomical model of lung shape using SIFT-flow for deformable registration of training masks to the patient CXR, and (iii) extracting refined lung boundaries using a graph cuts optimization approach with a customized energy function. Our average accuracy of 95.4% on the public JSRT database is the highest among published results. A similar degree of accuracy of 94.1% and 91.7% on two new CXR datasets from Montgomery County, Maryland (USA) and India, respectively, demonstrates the robustness of our lung segmentation approach.
Tuberculosis is a major health threat in many regions of the world. Opportunistic infections in immunocompromised HIV/AIDS patients and multi-drug-resistant bacterial strains have exacerbated the problem, while diagnosing tuberculosis still remains a challenge. When left undiagnosed and thus untreated, mortality rates of patients with tuberculosis are high. Standard diagnostics still rely on methods developed in the last century. They are slow and often unreliable. In an effort to reduce the burden of the disease, this paper presents our automated approach for detecting tuberculosis in conventional posteroanterior chest radiographs. We first extract the lung region using a graph cut segmentation method. For this lung region, we compute a set of texture and shape features, which enable the x-rays to be classified as normal or abnormal using a binary classifier. We measure the performance of our system on two datasets: a set collected by the tuberculosis control program of our local county's health department in the United States, and a set collected by Shenzhen Hospital, China. The proposed computer-aided diagnostic system for TB screening, which is ready for field deployment, achieves a performance that approaches the performance of human experts. We achieve an area under the ROC curve (AUC) of 87% (78.3% accuracy) for the first set, and an AUC of 90% (84% accuracy) for the second set. For the first set, we compare our system performance with the performance of radiologists. When trying not to miss any positive cases, radiologists achieve an accuracy of about 82% on this set, and their false positive rate is about half of our system's rate.
INTRODUCTION: The ability of multilayer back-propagation networks to learn complex, high-dimensional, nonlinear mappings from large collections of examples makes them obvious candidates for image recognition or speech recognition tasks (see PATTERN RECOGNITION AND NEURAL NETWORKS). In the traditional model of pattern recognition, a hand-designed feature extractor gathers relevant information from the input and eliminates irrelevant variabilities. A trainable classifier then categorizes the...
The future of image processing and CAD in diagnostic radiology is more promising now than ever, with increasingly impressive results being reported from various observer performance studies in both mammography and chest radiography. Clinical trials in years to come will help optimize the accuracy of the programs and determine the actual contribution of CAD to the interpretation process. Radiologists using output from computer analyses of images, however, will still make the final decision regarding diagnosis and patient management. Nonetheless, studies have indicated that the computer output need not have greater overall accuracy than a given radiologist in order to improve his or her performance. A systematic and gradual introduction of CAD into radiology departments will be necessary so that radiologists can become familiar with the strengths and weaknesses of each CAD program, thereby avoiding either excessive reliance or a dismissive attitude toward the computer output. This should ensure the acceptance of CAD and optimal diagnostic performance by the radiologist. Thus, an appropriate role for each CAD program will be determined for each radiologist, according to his or her individual training and observational skills, reducing intraobserver variations and improving diagnostic performance.
Our objective was to evaluate the impact of a computer-aided diagnostic scheme on radiologists' interpretations of chest radiographs with interstitial opacities by performing an observer test using receiver operating characteristic (ROC) analysis.
Twenty chest radiographs with normal findings and 20 chest radiographs with abnormal findings were used. Each radiograph was divided into four quadrants. One hundred twenty-nine quadrants (80 normal and 49 abnormal quadrants) were used for testing because we excluded 31 equivocal quadrants. Sixteen independent observers (10 residents and six attending radiologists) participated in this study. The radiologists' performance without and with computer assistance, which indicated cases with normal and abnormal findings by various markers, was evaluated by ROC analysis.
The diagnostic accuracy of the observers improved by a statistically significant magnitude when computer-aided diagnosis was used. Thus, the values for the area under the ROC curve obtained with and without the computer-aided diagnostic output were .970 and .948 (p = .0002), respectively, for all observers; .969 and .943 (p = .0006), respectively, for the residents' subgroup; and .972 and .960 (p = .162), respectively, for the attending radiologists' subgroup. The value for the area under the ROC curve for the computerized scheme by itself was .943.
Our computer-aided diagnostic scheme can assist radiologists in the diagnosis or exclusion of interstitial disease on chest radiographs.
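Areas under the ROC curve like those reported above are equivalent to the Mann-Whitney U statistic: the probability that a randomly chosen abnormal case receives a higher score than a randomly chosen normal case, with ties counting one half. A small self-contained sketch of that computation (the score lists are illustrative):

```python
def roc_auc(scores_pos, scores_neg):
    """Area under the ROC curve via the Mann-Whitney U statistic.

    scores_pos / scores_neg are classifier outputs for the abnormal
    and normal cases; ties contribute one half of a win.
    """
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))
```

For example, abnormal scores [0.9, 0.8] against normal scores [0.1, 0.8] give an AUC of 0.875: three of the four pairs are correctly ordered and one is tied.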
Chest radiography is still a useful examination in various situations, although CT has become a modality of choice as a diagnostic examination in many cases. Current computer-aided diagnosis (CAD) schemes for chest radiographs include nodule detection, interstitial disease detection, temporal subtraction, differential diagnosis of interstitial disease, and distinction between benign and malignant pulmonary nodules. All of these schemes are demonstrated as providing potentially useful tools for radiologists when the output of these schemes is used as a "second opinion." There are some commercially available products for these schemes and more are expected to be available in the near future. The current status of CAD for CT is also discussed briefly in this article.
This article presents a novel approach based on a computer-aided diagnostic (CAD) scheme and wavelet transforms to aid pneumonia diagnosis in children using chest radiograph images. The prototype system, named Pneumo-CAD, was designed to classify images into presence (PP) or absence (PA) of pneumonia.
The knowledge database for the Pneumo-CAD comprised chest images confirmed as PP or PA by two radiologists trained to interpret chest radiographs according to the WHO guidelines for the diagnosis of pneumonia in children. The performance of the Pneumo-CAD was evaluated on a subset of images randomly selected from the knowledge database. Retrieval of similar images was performed by feature extraction using wavelet transform coefficients of the image. The energy of the wavelet coefficients was used to compose the feature vector supporting the computational classification of images as PP or PA. Methodology I worked with a rank-weighted 15-nearest-neighbour scheme, while methodology II employed distance-dependent weighting for image classification. The performance of the prototype system was assessed by the ROC curve.
Overall, the Pneumo-CAD using the Haar wavelet presented the best accuracy in discriminating PP from PA for both methodology I (AUC = 0.97) and methodology II (AUC = 0.94), reaching a sensitivity of 100% and specificities of 80% and 90%, respectively.
Pneumo-CAD could represent a complementary tool to screen children with clinical suspicion of pneumonia, and so contribute to gathering information on burden-of-pneumonia estimates in order to help guide health policies toward preventive interventions.
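The wavelet-energy features used by Pneumo-CAD can be illustrated with a one-level Haar decomposition of a 1-D signal, where the energies of the approximation and detail bands form a tiny feature vector. This is a simplified sketch under stated assumptions — real CXR features would come from a 2-D, multi-level transform, and the even-length 1-D signal here is illustrative.

```python
def haar_energy_features(signal):
    """One-level orthonormal Haar decomposition of an even-length
    1-D signal, returning (approximation energy, detail energy)."""
    s = 2 ** -0.5  # orthonormal Haar scaling factor, 1/sqrt(2)
    approx = [s * (signal[i] + signal[i + 1]) for i in range(0, len(signal), 2)]
    detail = [s * (signal[i] - signal[i + 1]) for i in range(0, len(signal), 2)]
    return sum(a * a for a in approx), sum(d * d for d in detail)
```

Because the transform is orthonormal, the two band energies always sum to the energy of the input signal; a smooth region concentrates energy in the approximation band, while edges and texture shift it toward the detail band — which is what makes these energies usable as discriminative features.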
A new computer-aided detection (CAD) system is presented for the detection of pulmonary nodules on chest radiographs. Here, we present the details of the proposed algorithm and provide a performance analysis using a publicly available database to serve as a benchmark for future research efforts. All aspects of algorithm training were done using an independent dataset containing 167 chest radiographs with a total of 181 lung nodules. The publicly available test set was created by the Standard Digital Image Database Project Team of the Scientific Committee of the Japanese Society of Radiological Technology (JSRT). The JSRT dataset used here comprises 154 chest radiographs, each containing one radiologist-confirmed nodule (100 malignant cases, 54 benign cases). The CAD system uses an active shape model for anatomical segmentation. This is followed by a new weighted-multiscale convergence-index nodule candidate detector. A novel candidate segmentation algorithm is proposed that uses an adaptive distance-based threshold. A set of 114 features is computed for each candidate. A Fisher linear discriminant (FLD) classifier is used on a subset of 46 features to produce the final detections. Our results indicate that the system detects 78.1% of the nodules in the JSRT test set with an average of 4.0 false positives per image (excluding 14 cases containing lung nodules in the retrocardiac and subdiaphragmatic regions of the lung).