Conference Paper

A Survey of Medical Image Classification Techniques


Abstract

Medical informatics is the field that combines two sources of medical data: biomedical records and imaging data. Medical image data consist of pixels produced by imaging modalities, each pixel corresponding to part of a physical object. Exploring methods for medical image data is challenging: the goal is to extract their insight value and to analyze and diagnose specific diseases. Image classification plays an important role in computer-aided diagnosis and remains a major challenge in image analysis. The challenge lies in choosing methods and techniques that exploit image-processing and pattern-recognition results, applying classification methods, and then validating the classification results against medical expert knowledge. The main objective of medical image classification is not only to reach high accuracy but also to identify which parts of the human body are affected by the disease. This paper reviews the state of the art in image classification techniques for diagnosing human disease. The review covers the classification techniques themselves, the image modalities used, the datasets, and the trade-offs of each technique. It concludes that improving these techniques, e.g., increasing accuracy and sensitivity and making them feasible for computer-aided diagnosis, remains a major challenge and an open research problem.


... Medical image classification is an exciting area of study because it combines the problems of diagnosing and analyzing medical images [1]. Data clustering is the process of grouping together sets of related data. ...
... Each modality has a unique quality and a specific effect on the human body's structure and organs [1]. There are four imaging modalities: ...
... X-ray projection imaging is commonly used when laboratory tests alone are not enough to diagnose a medical problem. Producing an image comprises three main steps: pre-reading the image, reading the main image, and processing the image [1]. ...
Article
Full-text available
Imaging data and biomedical records are both included in the field of medical informatics. Medical image clustering is an important area of research that is receiving increased interest in both academia and the health professions. It addresses the issues of medical diagnosis, analysis, and education. Several medical imaging methods and applications based on data mining have been built and tested to deal with these problems. This paper looks at how image classification techniques diagnose diseases in the human body; it covers the imaging modalities, the datasets, and the pros and cons of each method. An optimal clustering technique for medical images using multiwavelet transforms is proposed, which combines the multiwavelet transform filter banks with the k-means clustering algorithm to improve performance and obtain a clinically meaningful clustering shape. In comparison with other clustering methods, it was shown that this method achieves a much higher cluster classification than those published before. A user-friendly Matlab program has been constructed to test the proposed algorithms and obtain their results.
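As a concrete illustration of the clustering step described above, the following is a minimal, pure-Python k-means sketch. The 2-D feature vectors, seed, and cluster count are illustrative stand-ins for the multiwavelet coefficients used in the paper, not the authors' implementation.

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Plain k-means: assign each point to its nearest centroid,
    then recompute centroids as cluster means, for a fixed number of rounds."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centroids[c])))
            clusters[i].append(p)
        for i, members in enumerate(clusters):
            if members:  # keep the old centroid if a cluster empties out
                centroids[i] = tuple(sum(xs) / len(xs) for xs in zip(*members))
    return centroids, clusters

# Two well-separated blobs standing in for wavelet-coefficient features.
pts = [(0.1, 0.2), (0.0, 0.1), (0.2, 0.0), (5.0, 5.1), (5.2, 4.9), (4.9, 5.0)]
centroids, clusters = kmeans(pts, 2)
```

In the paper's pipeline the input points would be filter-bank coefficients per image region rather than raw coordinates.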
... Classification helps to organize biomedical image databases into image categories before diagnostics [24][25][26][27][28][29][30]. Many investigations have been performed by researchers to improve classification for biomedical images [6, 7, 31–36]. In 2016, Miranda et al. surveyed medical image classification techniques. They reviewed the state-of-the-art image classification techniques for diagnosing human disease and covered the classification techniques, the image modalities used, the dataset, and the trade-off for each technique [31]. They concluded that the artificial neural network (ANN) classifier and the SVM are the most used techniques for image classification because they give high accuracy, high sensitivity, high specificity, and high classification performance [31]. In the same vein, Jiang et al. in 2017 investigated ML algorithms for healthcare [32]. ...
Article
Full-text available
In modern-day medicine, medical imaging has undergone immense advancements and can capture several biomedical images from patients. In the wake of this, to assist medical specialists, these images can be used to train an intelligent system in order to aid the determination of the different diseases identifiable from analyzing them. Classification plays an important role in this regard; it enhances the grouping of these images into categories of diseases and optimizes the next step of a computer-aided diagnosis system. The concept of classification in machine learning deals with the problem of identifying to which of a set of categories a new observation belongs, on the basis of a training set of data containing observations whose category membership is known. The goal of this paper is to perform a survey of classification algorithms for biomedical images. The paper then describes how these algorithms can be applied to a big-data architecture by using the Spark framework. This paper further proposes a classification workflow based on the observed optimal algorithms, Support Vector Machine and Deep Learning, as drawn from the literature. The algorithm for the feature-extraction step of the classification process is presented and can be customized in all other steps of the proposed classification workflow.
... Medical images can be of different modalities based on the biomedical devices used to generate them. These modalities include Projectional Radiography (X-rays), Computed Tomography (CT), Magnetic Resonance Imaging (MRI), and Ultrasound Imaging [14]. X-ray is the most common medical imaging technique in which electromagnetic waves are used to create a two-dimensional representation of the body's internal structure based on the varying rates of wave absorption in tissues with different densities. ...
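The density-dependent absorption mentioned in the snippet above is commonly modeled with the Beer-Lambert law, I = I0·exp(-μx). The sketch below uses hypothetical attenuation coefficients purely for illustration; real values depend on tissue composition and beam energy.

```python
import math

def transmitted(i0, mu, thickness_cm):
    """Beer-Lambert law: X-ray intensity remaining after passing
    through a uniform material of the given thickness."""
    return i0 * math.exp(-mu * thickness_cm)

# Illustrative attenuation coefficients (per cm); NOT physical reference values.
MU = {"soft_tissue": 0.2, "bone": 0.5}

i_soft = transmitted(1000.0, MU["soft_tissue"], 5.0)  # 5 cm of soft tissue
i_bone = transmitted(1000.0, MU["bone"], 5.0)         # 5 cm of bone
```

Denser material attenuates more, which is why bone appears brighter than soft tissue on a radiograph.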
... Experimentation was performed using two datasets: a breast ultrasound (BRUS) dataset and an optical coherence tomography (OCT) dataset. The BRUS dataset contains 39,904 images, 22,026 of which are labeled as malignant (14,557) or benign (7,469). The other 17,878 images are unlabeled. ...
Article
Full-text available
Training machine learning and deep learning models for medical image classification is a challenging task due to a lack of large, high-quality labeled datasets. As labeling medical images requires considerable time and effort from medical experts, models need to be specifically designed to train on small amounts of labeled data. An application of semi-supervised learning (SSL) methods provides one potential solution: SSL methods combine a small amount of labeled data with a much larger amount of unlabeled data, leveraging the information gained through unsupervised learning to improve the supervised model and achieve successful predictions. This paper provides a comprehensive survey of the latest SSL methods proposed for medical image classification tasks.
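A minimal sketch of the self-training flavor of SSL described above, assuming a toy nearest-centroid classifier on made-up 2-D features; the confidence margin and class names are illustrative, not taken from any cited method.

```python
def centroid(points):
    return tuple(sum(xs) / len(xs) for xs in zip(*points))

def dist2(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def fit(labeled):
    """Toy nearest-centroid 'model': one centroid per class."""
    classes = {}
    for x, y in labeled:
        classes.setdefault(y, []).append(x)
    return {y: centroid(xs) for y, xs in classes.items()}

def pseudo_label(model, unlabeled, margin=1.0):
    """Self-training step: keep only predictions where the nearest
    centroid beats the runner-up by a squared-distance margin."""
    confident = []
    for x in unlabeled:
        d = sorted((dist2(x, c), y) for y, c in model.items())
        if d[1][0] - d[0][0] >= margin:
            confident.append((x, d[0][1]))
    return confident

labeled = [((0.0, 0.0), "benign"), ((4.0, 4.0), "malignant")]
unlabeled = [(0.2, 0.1), (3.9, 4.2), (2.0, 2.0)]  # last point is ambiguous

model = fit(labeled)
new = pseudo_label(model, unlabeled)      # ambiguous point is skipped
model = fit(labeled + new)                # refit on the enlarged training set
```

Real SSL methods for images replace the centroid model with a deep network and iterate this labeled-set expansion.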
... In recent years, computer-based algorithms have played a significant role in biomedicine for automatic disease detection [2]. As technological advancements in the field of medical imaging continue, researchers have looked into using various AI-based methods to detect diseases like pulmonary nodules, interstitial lung disease, and tuberculosis with chest X-rays [3]. ...
... Code implementation of the preprocessing phase, along with the proposed ResNet-DCGAN model, is available on GitHub at https://tinyurl.com/4u8u45j2. Raw images are available at https://tinyurl.com/jnbau6cc. ...
Chapter
Detection of severe diseases like COVID-19 using deep learning (DL) models is a highly time-relevant subject given the present scenario. However, there is always a problem regarding the availability of sufficient data for training DL-based classification models. In this work, a ResNet-DCGAN model is proposed to generate synthesized COVID-19 chest X-ray images to tackle the data-scarcity problem. First, some image processing techniques are applied to the publicly available COVID-19 chest X-ray images. Thereafter, a ResNet50 Deep Convolutional Neural Network (DCNN) model is incorporated as the discriminator of the proposed ResNet-DCGAN model. Moreover, to train the proposed model efficiently, the RAdam optimization algorithm is used instead of the earlier Adam optimization algorithm. The proposed ResNet-DCGAN model had the edge over the state-of-the-art DCGAN model.
... Four different applications were carried out during the testing phase, and the LR, SR obtained with the original SRGAN [24], HR, and SR images obtained with TSRGAN were evaluated separately. The confusion matrices [31] of the classification results obtained using these image groups are shown in Table 4. In addition, the comparison of accuracy rates for all image groups is given in Table 5. ...
... predicted to be unhealthy. The accuracy rates are obtained by dividing these two values by the total value [31,47]. ...
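The accuracy computation described in the snippet above (correct predictions on the diagonal of the confusion matrix, divided by the total) can be sketched as follows; the matrix values are hypothetical.

```python
def accuracy(cm):
    """Accuracy from a confusion matrix cm[true][pred]:
    sum of the diagonal over the sum of all entries."""
    correct = sum(cm[i][i] for i in range(len(cm)))
    total = sum(sum(row) for row in cm)
    return correct / total

# Hypothetical 2x2 matrix: rows = actual (healthy, unhealthy), cols = predicted.
cm = [[45, 5],
      [3, 47]]

acc = accuracy(cm)  # (45 + 47) / 100 = 0.92
```

The same function generalizes to multi-class matrices, since only the diagonal and the grand total are used.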
Article
Full-text available
Thermal imaging can be used in many sectors of image processing, such as public security, health, and defense. However, thermal imaging systems are very costly, which limits their use, especially in the medical field. Thermal camera systems also produce blurry images with low levels of detail, so the need to improve their resolution has arisen; super-resolution techniques can be a solution. Developments in deep learning in recent years have increased the success of super-resolution (SR) applications. This study proposes a new deep learning-based approach, the TSRGAN model, for SR applications performed on a new dataset consisting of thermal images of premature babies. The dataset was created by downscaling the thermal images (ground truth) of premature babies, as in traditional SR studies; thus, a dataset consisting of high-resolution (HR) and low-resolution (LR) thermal images was obtained. The resulting SR images were compared with the LR images, bicubic-interpolation images, and SR images obtained using state-of-the-art models. The success of the results was evaluated using the image quality metrics of peak signal-to-noise ratio (PSNR) and structural similarity index measure (SSIM). The results show that the proposed model achieved the second-best PSNR value and the best SSIM value. Additionally, a CNN-based classifier model was developed to perform task-based evaluation, and classification applications were carried out separately on the LR, HR, and reconstructed SR image sets, comparing the success of classifying unhealthy and healthy babies. This study showed that the classification accuracy of SR images increased by approximately 5% compared to that of LR images, and approached the classification accuracy of HR thermal images to within about 2%.
Therefore, the approach proposed in this study demonstrates that LR thermal images can be used in classification applications by increasing their resolution, enabling widespread, lower-cost use of thermal imaging systems in the medical field.
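The PSNR metric used in the evaluation above can be sketched as follows for flat lists of pixel values; the sample images are made up, and real evaluations operate on full 2-D images.

```python
import math

def psnr(original, reconstructed, max_val=255.0):
    """Peak signal-to-noise ratio in dB between two equal-sized images,
    given here as flat lists of pixel values."""
    mse = sum((a - b) ** 2 for a, b in zip(original, reconstructed)) / len(original)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * math.log10(max_val ** 2 / mse)

hr = [100, 120, 130, 140]  # toy "ground truth" pixels
sr = [101, 119, 131, 142]  # toy "super-resolved" pixels
score = psnr(hr, sr)       # higher dB = closer to the ground truth
```

Higher PSNR means less pixel-wise distortion; SSIM, the other metric mentioned, additionally accounts for structural similarity and is not reproduced here.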
... One of the fundamental tasks of CAD is the aspect of image segmentation, the results of which can be used as key evidence in the pathologists' diagnostic processes. Along with the rapid development of medical image segmentation methodology, there is a wide demand for its application to identify benign and malignant tumors, tumor differentiation stages, and other related fields (35). Therefore, a multi-class image segmentation method is needed to obtain high segmentation accuracy and good robustness (36). ...
Article
Full-text available
Background and purpose Colorectal cancer is a common fatal malignancy, the fourth most common cancer in men and the third most common cancer in women worldwide. Timely detection of cancer in its early stages is essential for treating the disease. Currently, there is a lack of datasets for histopathological image segmentation of colorectal cancer, which often hampers assessment accuracy when computer technology is used to aid diagnosis. Methods This study provides a new publicly available Enteroscope Biopsy Histopathological Hematoxylin and Eosin Image Dataset for Image Segmentation Tasks (EBHI-Seg). To demonstrate the validity and extensiveness of EBHI-Seg, experimental results on EBHI-Seg are evaluated using classical machine learning methods and deep learning methods. Results The experimental results showed that deep learning methods had better image segmentation performance on EBHI-Seg. The maximum Dice score for the classical machine learning methods is 0.948, while that for the deep learning methods is 0.965. Conclusion This publicly available dataset contains 4,456 images covering six types of tumor differentiation stages and the corresponding ground truth images. The dataset can provide researchers with new segmentation algorithms for the medical diagnosis of colorectal cancer, which can be used in the clinical setting to help doctors and patients. EBHI-Seg is publicly available at: https://figshare.com/articles/dataset/EBHI-SEG/21540159/1 .
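The Dice metric reported above compares a predicted segmentation mask with the ground truth; a minimal sketch on toy binary masks (not EBHI-Seg data):

```python
def dice(mask_a, mask_b):
    """Dice coefficient between two binary masks (flat 0/1 lists):
    2 * |A ∩ B| / (|A| + |B|)."""
    inter = sum(a * b for a, b in zip(mask_a, mask_b))
    size = sum(mask_a) + sum(mask_b)
    return 2.0 * inter / size if size else 1.0

pred  = [0, 1, 1, 1, 0, 0]  # toy predicted mask
truth = [0, 1, 1, 0, 0, 0]  # toy ground-truth mask
score = dice(pred, truth)   # 2*2 / (3+2) = 0.8
```

A score of 1.0 indicates perfect overlap; for 2-D masks the same formula is applied after flattening.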
... QA systems have been extensively used in many fields, including retrieval systems [5,6] and medical image analysis [7,8]. In the field of medicine, a medical QA system, unlike other auxiliary clinical diagnosis systems [9], such as image segmentation [10], image classification [11], and surgical robots [12], can support the workload of performing radiographic image interpretation and pathological diagnosis simultaneously without dependence on expertise. Moreover, such a medical QA system would be helpful for patients to obtain reliable and accurate information even after treatment. ...
Article
Full-text available
Auxiliary clinical diagnosis has been researched to solve unevenly and insufficiently distributed clinical resources. However, auxiliary diagnosis is still dominated by human physicians, and how to make intelligent systems more involved in the diagnosis process is gradually becoming a concern. An interactive automated clinical diagnosis with a question-answering system and a question generation system can capture a patient’s conditions from multiple perspectives with less physician involvement by asking different questions to drive and guide the diagnosis. This clinical diagnosis process requires diverse information to evaluate a patient from different perspectives to obtain an accurate diagnosis. Recently proposed medical question generation systems have not considered diversity. Thus, we propose a diversity learning-based visual question generation model using a multi-latent space to generate informative question sets from medical images. The proposed method generates various questions by embedding visual and language information in different latent spaces, whose diversity is trained by our newly proposed loss. We have also added control over the categories of generated questions, making the generated questions directional. Furthermore, we use a new metric named similarity to accurately evaluate the proposed model’s performance. The experimental results on the Slake and VQA-RAD datasets demonstrate that the proposed method can generate questions with diverse information. Our model works with an answering model for interactive automated clinical diagnosis and generates datasets to replace the process of annotation that incurs huge labor costs.
... The deep neural network (DNN), especially the convolutional neural network (CNN), has been utilized extensively in a variety of image classification tasks and has attained remarkable results since 2012. A few studies on medical image classification using CNNs have attained performance rivaling human experts [7]. Medical images are difficult to gather, since the labeling and collection of clinical data face data privacy concerns as well as the requirement for time-consuming expert annotation. ...
... Image classification has an integral role to play in the field of computer-aided-diagnosis as well. Medical image classification which is a part of computer-aided-diagnosis aims at achieving a high accuracy along with the identification of the parts of the human body which are infected by the disease [7]. ...
Article
This work elaborates on the integration of a rudimentary Convolutional Neural Network (CNN) with Long Short-Term Memory (LSTM), resulting in a new paradigm in the well-explored field of image classification. LSTM is a kind of Recurrent Neural Network (RNN) with the potential to memorize long-term dependencies. It was observed that LSTMs can complement the feature-extraction ability of CNNs when used in a layered order: LSTMs can selectively remember patterns for a long duration, while CNNs extract the important features from them. This layered LSTM-CNN structure, when used for image classification, has an edge over a conventional CNN classifier. The proposed model is based on families of artificial neural networks, namely recurrent and convolutional neural networks; hence it is robust and suitable for a wide spectrum of classification tasks. To validate these results, we have tested our model on two standard datasets. The results have been compared with other classifiers to establish the significance of our proposed model.
... Among these, the brain tumor is one of the deadliest diseases and is often fatal. This research work concentrates on classifying the existence of brain tumors using patients' MRI images [1]. It addresses the problem of segmenting normal and affected tissues in magnetic resonance imaging; statistical and size-based features are extracted using the Gray-Level Co-occurrence Matrix (GLCM). ...
Article
A cluster of tissue that affects normal tissue through the gradual expansion of irregular cells is known as a brain tumour. It occurs when cells develop abnormally inside the brain, and it remains a primary cause of rising fatality rates among humans. Among the wide range of cancers, brain tumour is exceptionally serious, and quick diagnosis and treatment must be provided to avoid threats to an individual's life. Identifying these cells is a difficult problem because of how tumour cells form, so it is crucial to classify brain tumours from MRI images. In the proposed work, MRI images retrieved using a content-based image retrieval technique are taken as input for classification. To achieve classification accuracy and efficient segmentation, pre-processing is carried out for colour conversion, noise reduction, and resizing. Segmentation of tumour cells is done by the Expectation-Maximization technique to identify the region of affected cells in the segmented area. This is followed by statistical and size-based feature extraction with the Gray-Level Co-occurrence Matrix from the segmented images. The features extracted from the segmented portion are used for training to analyze the presence of a tumour in the given MR images. The brain tumour classification, implemented in the MATLAB environment, localizes a mass of abnormal cells in an MRI slice using an SVM classifier, which is fast and also gives good classification accuracy. Experimental results achieved an accuracy of 100% in distinguishing tissues as normal or abnormal in MR images, demonstrating the viability of the proposed method.
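A minimal sketch of the GLCM feature extraction mentioned above, for a single horizontal offset on a toy 3-level image; production implementations typically handle multiple offsets, symmetry, and normalization.

```python
def glcm(image, levels, dx=1, dy=0):
    """Gray-Level Co-occurrence Matrix for one offset (dx, dy):
    counts how often gray level i co-occurs with gray level j."""
    h, w = len(image), len(image[0])
    m = [[0] * levels for _ in range(levels)]
    for r in range(h):
        for c in range(w):
            r2, c2 = r + dy, c + dx
            if 0 <= r2 < h and 0 <= c2 < w:
                m[image[r][c]][image[r2][c2]] += 1
    return m

def contrast(m):
    """Classic GLCM texture feature: (i - j)^2 weighted by normalized counts."""
    total = sum(sum(row) for row in m)
    return sum((i - j) ** 2 * m[i][j]
               for i in range(len(m)) for j in range(len(m))) / total

# Toy 3x3 image quantized to 3 gray levels.
img = [[0, 0, 1],
       [0, 1, 1],
       [2, 2, 2]]
g = glcm(img, 3)
```

Statistical features such as contrast, energy, and homogeneity computed from this matrix are what the cited work feeds to its SVM classifier.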
... Numerous medical imaging modalities use ionizing radiation, magnetic resonance, nuclear medicine, optical methods, and ultrasound as the media. Each modality has unique characteristics and responses to the human body's structure [64]. These modalities serve various purposes, such as obtaining images from inside the human body or imaging samples of parts that cannot be seen with the naked eye [65]. ...
Article
Full-text available
Medical image processing and analysis techniques play a significant role in diagnosing diseases. Thus, during the last decade, several noteworthy improvements in medical diagnostics have been made based on medical image processing techniques. In this article, we reviewed articles published in the most important journals and conferences that used or proposed medical image analysis techniques to diagnose diseases. Starting from four scientific databases, we applied the PRISMA technique to efficiently process and refine articles until we obtained forty research articles published in the last five years (2017–2021) aimed at answering our research questions. The medical image processing and analysis approaches were identified, examined, and discussed, including preprocessing, segmentation, feature extraction, classification, evaluation metrics, and diagnosis techniques. This article also sheds light on machine learning and deep learning approaches. We also focused on the most important medical image processing techniques used in these articles to establish the best methodologies for future approaches, discussing the most efficient ones and proposing in this way a comprehensive reference source of methods of medical image processing and analysis that can be very useful in future medical diagnosis systems.
... Early methods for CAD comprise three stages: hand-crafted feature extraction, feature selection, and classification [5]. These methods rely heavily on the expertise of experienced experts and are also limited in performance. ...
Article
Full-text available
Laryngeal disease classification is a relatively hard task in medical image processing, owing to complex structures and varying viewpoints in data collection. Some existing methods try to tackle this task with convolutional neural networks, but they more or less ignore the intrinsic difficulty differences among input samples and suffer from high training complexity. To better resolve these problems, an end-to-end Hierarchical Dynamic Convolutional Network (HDCNet) is proposed, which can dynamically process input samples based on their difficulty. Easily classified samples are processed at a smaller resolution with a relatively small network, while difficult samples are passed to a large network at a larger resolution for more accurate classification results. Furthermore, a Feature Reuse Module (FRM) is designed to transfer the features learned by the small network to the corresponding block of the deep network, enhancing the overall performance on rather complicated samples. To validate the effectiveness of the proposed HDCNet, comprehensive experiments are conducted on the publicly available laryngeal disease classification dataset, and HDCNet provides superior performance compared with other current state-of-the-art methods.
... While the domain of image classification includes several other subcategories, like hyperspectral image classification [27,46] and medical imaging [55], in this paper, we focus on the case of natural image classification, which includes the most common benchmarks used in the settings considered in this paper. However, the structure of the proposed module could potentially be also used on other classification problems, adapting it to the new domains (Section 5.3). ...
Article
Full-text available
Deep neural networks are the driving force of the recent explosion of machine learning applications in everyday life. However, they usually require a lot of training data to work well, and they act as black boxes, making predictions without any explanation. This paper presents Memory Wrap, a module (i.e., a set of layers) that can be added to deep learning models to improve their performance and interpretability in settings where few data are available. Memory Wrap adopts a sparse content-attention mechanism between the input and some memories of past training samples. We show that adding Memory Wrap to standard deep neural networks improves their performance when they learn from a limited set of data, and allows them to reach comparable performance when they learn from the full dataset. We discuss how the analysis of its structure and content-attention weights helps to provide insights into its decision process and makes predictions more interpretable, compared to the same networks without Memory Wrap. We test our approach on image classification tasks using several networks on three different datasets, namely CIFAR10, SVHN, and CINIC10.
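A simplified stand-in for the content-attention idea described above: softmax weights over similarities between an input and stored memory vectors. This dense softmax is only illustrative; Memory Wrap's actual sparse attention mechanism differs, and the vectors here are made up.

```python
import math

def attention_weights(query, memories):
    """Content attention sketch: softmax over negative squared-distance
    similarity between the query and each stored memory vector."""
    scores = [-sum((q - m) ** 2 for q, m in zip(query, mem)) for mem in memories]
    mx = max(scores)
    exps = [math.exp(s - mx) for s in scores]  # subtract max for numerical stability
    z = sum(exps)
    return [e / z for e in exps]

# Toy memories of past training samples, and a query close to the second one.
memories = [(0.0, 0.0), (1.0, 1.0), (5.0, 5.0)]
w = attention_weights((0.9, 1.1), memories)
```

The weight vector `w` shows which past samples most influenced the prediction, which is the source of the interpretability claimed above.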
... Before the widespread use of deep learning (DL)-based methods, extensive research was carried out using traditional computer vision (CV) methods, where features were engineered or discovered, extracted from images, and then used to train supervised machine learning (ML) algorithms [101]. However, it is safe to argue that almost all current research on medical image classification, object recognition, and medical image segmentation is driven by DL-based methods, especially in the last 10 years [90]. ...
Article
Full-text available
The recent development in the areas of deep learning and deep convolutional neural networks has significantly progressed and advanced the field of computer vision (CV) and image analysis and understanding. Complex tasks such as classifying and segmenting medical images and localising and recognising objects of interest have become much less challenging. This progress has the potential of accelerating research and deployment of multitudes of medical applications that utilise CV. However, in reality, there are limited practical examples being physically deployed into front-line health facilities. In this paper, we examine the current state of the art in CV as applied to the medical domain. We discuss the main challenges in CV and intelligent data-driven medical applications and suggest future directions to accelerate research, development, and deployment of CV applications in health practices. First, we critically review existing literature in the CV domain that addresses complex vision tasks, including: medical image classification; shape and object recognition from images; and medical segmentation. Second, we present an in-depth discussion of the various challenges that are considered barriers to accelerating research, development, and deployment of intelligent CV methods in real-life medical applications and hospitals. Finally, we conclude by discussing future directions.
... An important aspect of computer-aided analysis is image classification, the results of which can provide pathologists with important evidence in the process of histopathological diagnosis. With the development of medical image classification technology, it is urgently needed in the fields of identifying benign and malignant tumors, tumor differentiation stages, and tumor subtypes [12]. To this end, we need a multi-category colorectal cancer dataset to test various medical image classification methods for high classification accuracy and good robustness [13]. ...
Preprint
Full-text available
Background and purpose: Colorectal cancer has become the third most common cancer worldwide, accounting for approximately 10% of cancer patients. Early detection of the disease is important for the treatment of colorectal cancer patients. Histopathological examination is the gold standard for screening colorectal cancer. However, the current lack of histopathological image datasets of colorectal cancer, especially enteroscope biopsies, hinders the accurate evaluation of computer-aided diagnosis techniques. Methods: A new publicly available Enteroscope Biopsy Histopathological H&E Image Dataset (EBHI) is published in this paper. To demonstrate the effectiveness of the EBHI dataset, we have utilized several machine learning, convolutional neural network, and novel transformer-based classifiers for experimentation and evaluation, using images at a magnification of 200x. Results: Experimental results show that the deep learning methods perform well on the EBHI dataset. Traditional machine learning methods achieve a maximum accuracy of 76.02%, and the deep learning method achieves a maximum accuracy of 95.37%. Conclusion: To the best of our knowledge, EBHI is the first publicly available colorectal histopathology enteroscope biopsy dataset with four magnifications and five types of images of tumor differentiation stages, totaling 5,532 images. We believe that EBHI could attract researchers to explore new classification algorithms for the automated diagnosis of colorectal cancer, which could help physicians and patients in clinical settings.
... Successful hand-crafted models in medical imaging include total variation [42], non-local self-similarity [43], sparsity/structured sparsity [44], Markov-tree models on wavelet coefficients [45], and untrained neural networks [46]–[48]. These models have been extensively leveraged in the medical domain for image segmentation [49], reconstruction [50], disease classification [51], enhancement [52], and anomaly detection [53], owing to their interpretability, solid mathematical foundations, and theoretical support regarding robustness, recovery, and complexity [54], [55]. Further, unlike deep learning-based approaches, they do not require large annotated medical imaging datasets for training. ...
Preprint
Full-text available
Following unprecedented success on natural language tasks, Transformers have been successfully applied to several computer vision problems, achieving state-of-the-art results and prompting researchers to reconsider the supremacy of convolutional neural networks (CNNs) as de facto operators. Capitalizing on these advances in computer vision, the medical imaging field has also witnessed growing interest in Transformers, which can capture global context compared to CNNs with local receptive fields. Inspired by this transition, in this survey, we attempt to provide a comprehensive review of the applications of Transformers in medical imaging covering various aspects, ranging from recently proposed architectural designs to unsolved issues. Specifically, we survey the use of Transformers in medical image segmentation, detection, classification, reconstruction, synthesis, registration, clinical report generation, and other tasks. In particular, for each of these applications, we develop a taxonomy, identify application-specific challenges as well as provide insights to solve them, and highlight recent trends. Further, we provide a critical discussion of the field's current state as a whole, including the identification of key challenges, open problems, and promising future directions. We hope this survey will ignite further interest in the community and provide researchers with an up-to-date reference regarding applications of Transformer models in medical imaging. Finally, to cope with the rapid development in this field, we intend to regularly update the relevant latest papers and their open-source implementations at https://github.com/fahadshamshad/awesome-transformers-in-medical-imaging.
... In the fields of distinguishing benign from malignant tumors, distinguishing the differentiation stage of tumors, and distinguishing the subtype of cancer [12], the results of image classification methods can serve as an important reference for clinicians in diagnostic practice. Furthermore, with the development of medical image classification technology, the main purpose of this technology is to achieve high accuracy and high anti-interference ability [13,14]. Although the mainstream trend is to scan whole-slide images for analysis, practical work often runs into computing-resource shortages, so the whole-slide images are usually cropped into many sub-size images for analysis. ...
Article
Background and objective Gastric cancer has turned out to be the fifth most common cancer globally, and early detection of gastric cancer is essential to save lives. Histopathological examination of gastric cancer is the gold standard for the diagnosis of gastric cancer. However, computer-aided diagnostic techniques are challenging to evaluate due to the scarcity of publicly available gastric histopathology image datasets. Methods In this paper, a novel publicly available Gastric Histopathology Sub-size Image Database (GasHisSDB) is published to identify classifiers’ performance. Specifically, two types of data are included: normal and abnormal, with a total of 245,196 tissue case images. In order to prove that methods from different periods in the field of image classification show discrepancies on GasHisSDB, we select a variety of classifiers for evaluation. Seven classical machine learning classifiers, three Convolutional Neural Network classifiers, and a novel transformer-based classifier are selected for testing on image classification tasks. Results This study performed extensive experiments using traditional machine learning and deep learning methods to prove that methods from different periods show discrepancies on GasHisSDB. Traditional machine learning achieved a best accuracy rate of 86.08% and a minimum of just 41.12%. The best accuracy of deep learning reached 96.47% and the lowest was 86.21%. Accuracy rates vary significantly across classifiers. Conclusions To the best of our knowledge, it is the first publicly available gastric cancer histopathology dataset containing a large number of images for weakly supervised learning. We believe that GasHisSDB can attract researchers to explore new algorithms for the automated diagnosis of gastric cancer, which can help physicians and patients in the clinical setting.
... The confusion matrices of the classification results obtained after the test phase are shown in Table 3. The preferred metric may differ according to the application [31]. In this implementation, the success of classifying unhealthy and healthy babies is evaluated in practice, and there are equal numbers of observations for both classes. ...
Article
Thermal camera systems can be used in all kinds of applications that require the detection of heat change, but thermal imaging systems are highly costly. In recent years, developments in the field of deep learning have increased success by obtaining higher-quality results than traditional methods. In this paper, thermal images of neonates (healthy - unhealthy) obtained from a high-resolution thermal camera were used, and these images were treated as high-resolution (ground truth) images. Later, these thermal images were downscaled at 1/2, 1/4, and 1/8 ratios, and three different datasets consisting of low-resolution images of different sizes were obtained. In this way, super-resolution applications were carried out on a deep network model developed based on generative adversarial networks (GAN) using the three datasets. Performance was evaluated with PSNR (peak signal-to-noise ratio) and SSIM (structural similarity index measure). In addition, a healthy - unhealthy classification application was carried out by means of a classifier network based on convolutional neural networks (CNN) to evaluate the super-resolution images obtained from the different datasets. The obtained results show the importance of combining medical thermal imaging with super-resolution methods.
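The PSNR metric used in this and similar super-resolution studies has a simple closed form; below is an illustrative sketch (not the authors' implementation) for 8-bit images represented as flat lists of pixel intensities:

```python
import math

def psnr(img_a, img_b, max_val=255.0):
    """Peak signal-to-noise ratio between two equally sized images,
    given as flat lists of pixel intensities."""
    mse = sum((a - b) ** 2 for a, b in zip(img_a, img_b)) / len(img_a)
    if mse == 0:
        return float("inf")  # identical images
    return 10 * math.log10(max_val ** 2 / mse)

# A uniform error of 10 intensity levels gives MSE = 100 -> ~28.13 dB
print(round(psnr([0, 0, 0, 0], [10, 10, 10, 10]), 2))  # 28.13
```

Higher values mean the super-resolved image is closer to the ground truth; identical images give an infinite PSNR.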
... In recent times, deep learning-based neural network models have shown promising results in medical image classification and object detection [8][9][10]. This is why convolutional neural network-based models have been extensively studied for ocular disease detection [9][11][12]. ...
... In [8], a Gabor rotation-invariant LBP (MGRLBP) method is proposed for medical image classification. In [9], a review of methods for medical image classification is available. In this work, we are concerned with polyp classification, and hence the literature study is confined to this domain. ...
Article
Full-text available
In this paper, a method is proposed for colonic polyp classification which can perform a virtual biopsy for assessing the stage of malignancy in polyps. The geometry, texture, and colour of a polyp give sufficient cues about its nature. The proposed framework characterizes the geometry or shape of a polyp by pyramid histogram of oriented gradient (PHOG) features. To encapsulate the texture of the polyp surface, a fractal weighted local binary pattern (FWLBP) descriptor is employed, which is robust to affine transformations. It is also partially robust to illumination variations, which are generally encountered during endoscopy. The optimal feature fusion is done using a feature ranking algorithm based on fuzzy entropy. Finally, to evaluate the classification performance of the proposed model, kernel-based support vector machines (SVM) and a RUSBoosted tree are used. Experimental results carried out on two databases clearly indicate that the proposed method can be used in colonoscopic polyp classification. The proposed method gives polyp classification accuracies of 90.12% and 84.1%, and AUCs of 0.91 and 0.92, for a publicly available database and our own database, respectively.
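The FWLBP descriptor above builds on the classic local binary pattern; as a minimal sketch (plain LBP, not the fractal-weighted variant the paper proposes), the 8-bit code of a pixel can be computed from a made-up 3x3 patch like this:

```python
def lbp_code(patch):
    """Basic 3x3 local binary pattern: threshold the 8 neighbours
    against the centre pixel and pack the bits into one byte."""
    c = patch[1][1]
    # neighbours in clockwise order starting at the top-left corner
    neighbours = [patch[0][0], patch[0][1], patch[0][2], patch[1][2],
                  patch[2][2], patch[2][1], patch[2][0], patch[1][0]]
    return sum((1 << i) for i, n in enumerate(neighbours) if n >= c)

patch = [[6, 5, 2],
         [7, 6, 1],
         [9, 8, 7]]
print(lbp_code(patch))  # 241
```

A histogram of these codes over an image region is the usual LBP texture feature fed to a classifier such as an SVM.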
... Although there exist a number of reviews on deep learning methods for medical image analysis [4][5][6][7][8][9][10][11][12][13], most of them emphasize either general deep learning techniques or specific clinical applications. The most comprehensive review paper is the work of Litjens et al., published in 2017 [12]. ...
Article
Full-text available
Importance. With the booming growth of artificial intelligence (AI), especially the recent advancements of deep learning, utilizing advanced deep learning-based methods for medical image analysis has become an active research area both in the medical industry and academia. This paper reviews the recent progress of deep learning research in medical image analysis and clinical applications. It also discusses the existing problems in the field and provides possible solutions and future directions. Highlights. This paper reviews the advancement of convolutional neural network-based techniques in clinical applications. More specifically, state-of-the-art clinical applications cover four major human body systems: the nervous system, the cardiovascular system, the digestive system, and the skeletal system. Overall, according to the best available evidence, deep learning models perform well in medical image analysis, but it cannot be ignored that algorithms derived from small-scale medical datasets impede clinical applicability. Future directions could include federated learning, benchmark dataset collection, and utilizing domain subject knowledge as priors. Conclusion. Recent advanced deep learning technologies have achieved great success in medical image analysis with high accuracy, efficiency, stability, and scalability. Technological advancements that can alleviate the high demand for high-quality large-scale datasets could be one of the future developments in this area.
... In the fields of distinguishing benign from malignant tumors, distinguishing the differentiation stage of tumors, and distinguishing the subtype of cancer [12], the results of image classification methods can serve as an important reference for clinicians in diagnostic practice. Furthermore, with the development of medical image classification technology, the main purpose of this technology is to achieve high accuracy and high anti-interference ability [13,14]. Although the mainstream trend is to scan whole-slide images for analysis, practical work often runs into computing-resource shortages, so the whole-slide images are usually cropped into many sub-size images for analysis. ...
Preprint
Full-text available
GasHisSDB is a new Gastric Histopathology Sub-size Image Database with a total of 245,196 images. GasHisSDB is divided into a 160*160-pixel sub-database, a 120*120-pixel sub-database and an 80*80-pixel sub-database. GasHisSDB is made to support the evaluation of image classification. In order to prove that methods from different periods in the field of image classification show discrepancies on GasHisSDB, we select a variety of classifiers for evaluation. Seven classical machine learning classifiers, three CNN classifiers and a novel transformer-based classifier are selected for testing on image classification tasks. GasHisSDB is available at the URL: https://github.com/NEUhwm/GasHisSDB.git.
... SVMs transform low-dimensional training samples into higher-dimensional spaces using kernel functions. These methods are widely regarded as among the most effective high-accuracy classification methods and work well with fewer training samples [32]. A large body of literature confirms that combining deep features and SVMs can be a superior approach in pathology image classification [21], [41]. ...
Article
Full-text available
In recent times, the performance of computer-aided diagnosis systems in the classification of malignancies has significantly improved. Search and retrieval methods are particularly important as they assist physicians in making the right diagnosis in medical imaging, owing to their ability to obtain similar cases for a query image. Supervised classification algorithms are generally more accurate than unsupervised search-based classifications; however, the latter may more easily provide insights into the decision-making process by providing a group of similar cases and their corresponding metadata (i.e., diagnostic reports) and not simply a class probability. In this study, we propose a class-aware search operating on deep image embeddings to increase the accuracy of content-based search. We validate our methodology using two different publicly available datasets, one containing endometrial cancer images and the other containing colorectal cancer images. The proposed class-aware scenarios can enhance the accuracy of the search-based classifier, thereby making them more feasible in practice. With search results providing access to the metadata of retrieved cases (i.e., pathology reports of evidently diagnosed cases), such a combination has clear benefits for assisting experts with explainable results.
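The kernel functions mentioned in the citing passage let an SVM act in a higher-dimensional space without ever computing the mapping explicitly; a minimal sketch of the widely used RBF (Gaussian) kernel, with a hypothetical gamma value:

```python
import math

def rbf_kernel(x, z, gamma=0.5):
    """Gaussian (RBF) kernel: exp(-gamma * ||x - z||^2).
    Equals 1 for identical points and decays with distance."""
    sq_dist = sum((a - b) ** 2 for a, b in zip(x, z))
    return math.exp(-gamma * sq_dist)

print(rbf_kernel([1.0, 2.0], [1.0, 2.0]))            # identical points -> 1.0
print(round(rbf_kernel([0.0, 0.0], [1.0, 1.0]), 4))  # 0.3679
```

The kernel value acts as a similarity score; an SVM's decision function is a weighted sum of such scores between the query point and the support vectors.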
... Eka Miranda et al. [14] surveyed various medical image classification techniques. The data acquisition includes X-ray, CT scan, MRI and ultrasound imaging. ...
Chapter
Full-text available
In the present-day scenario, where huge volumes of data are being generated from various sources, storing and processing these data using traditional systems is a big challenge. The majority of the data is unstructured; hence, suitable architectures should be designed to meet the continuous challenges. Among the possible solutions to the big data problem, one of the best for addressing huge volumes of unstructured data is Hadoop. In the medical field, huge volumes of clinical image data result from the respective hardware tools. The methods required to store, analyze, process and classify these medical images can be implemented with the map-reduce architecture of the Hadoop framework, thereby reducing the computational time of the overall processing, as the mapper performs parallel processing. This paper includes a detailed review of Hadoop and its components. The main motive of this work is to deal with medical image data using an efficient architecture such that automatic clustering or classification of images is done within the architecture itself. The clustering of these medical images for future predictions and diagnosis of disease is essential. In the map-reduce architecture, along with the map and reduce phases, the usage of combiners and partitioners improves the efficiency of medical image processing for clustering the image data. The other responsibility of this paper is to review recent works in image data clustering along with state-of-the-art techniques for image classification. The clustered medical images will be used for automatic prediction and diagnosis of various patient diseases by applying Convolutional Neural Network (CNN) techniques on top of the clustered or classified images.
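The mapper/reducer split described above can be mimicked in a few lines to show the idea; a toy sketch with hypothetical image-label records, not Hadoop itself:

```python
from collections import defaultdict

# Hypothetical records: (image_id, predicted_class) pairs entering the mapper
records = [("img1", "MRI"), ("img2", "CT"), ("img3", "MRI"), ("img4", "X-ray")]

def map_phase(record):
    """Mapper: emit a (class, 1) key-value pair for each image."""
    _, label = record
    return (label, 1)

def reduce_phase(pairs):
    """Reducer: sum the counts for each class key."""
    counts = defaultdict(int)
    for key, value in pairs:
        counts[key] += value
    return dict(counts)

print(reduce_phase(map(map_phase, records)))  # {'MRI': 2, 'CT': 1, 'X-ray': 1}
```

In Hadoop, a combiner would run this same summation locally on each mapper node before the shuffle, and a partitioner would decide which reducer receives each key.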
Chapter
In this chapter, a review of machine learning (ML) and pattern recognition concepts is given, and basic ML techniques (supervised, unsupervised, and reinforcement learning) are described. Also, a brief history of ML development from the primary works before the 1950s (including Bayesian theory) up to the most recent approaches (including deep learning) is presented. Then, an introduction to the support vector machine (SVM) with a geometric interpretation is given, and its basic concepts and formulations are described. A history of SVM progress (from Vapnik's primary works in the 1960s up to now) is also reviewed. Finally, various ML applications of SVM in several fields such as medical, text classification, and image classification are presented. Keywords: Machine learning, Pattern recognition, Support vector machine, History
Article
Thermal imaging systems are harmless to human health and enable contactless heat measurements. Thermal cameras are used in many public sectors where it is necessary to detect changes in temperature values. However, thermal cameras are costly and produce images with low edge information, which prevents their widespread use. Therefore, in recent years, research to advance the quality of thermal images has increased. Within the scope of this paper, first, three different datasets consisting of colour-format thermal images of neonates were created. Also, the TSRGAN+ deep network model was presented for super-resolution studies. The obtained super-resolution images visually approached the ground truth images to a great extent. In addition, these results were compared using the peak signal-to-noise ratio (PSNR) and structural similarity index measure (SSIM) image quality metrics. The proposed model showed superior success in terms of PSNR and SSIM compared to the state-of-the-art models. Here, the PSNR value of the proposed TSRGAN+ model increased by 1–1.5 dB compared to the TSRGAN network architecture, while the SSIM value increased by around 2–3%. Finally, unhealthy-healthy image classification applications were performed on all thermal image sets in order to implement both a task-based evaluation and a real-life application. Thus, a new method is presented to evaluate the results of super-resolution studies. First, a CNN-based classifier was designed and classification metrics were obtained for all three datasets. Then, transfer learning was applied using state-of-the-art models (ResNet101, Xception) to increase classification success. The most successful results were obtained in applications using the ResNet101 model. Also, the developed model outperformed TSRGAN, which achieved the second-best result.
When all the obtained results are evaluated, it is observed that the super-resolution models increase the success of unhealthy-healthy classification by about 10% compared to the low-resolution images. In other words, the effect of super-resolution techniques on classification applications is clearly seen. In summary, the sensible use of super-resolution research will enable the common use of low-cost thermal cameras in application fields such as medicine.
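The classification success figures reported in studies like this derive from the confusion matrix; a minimal sketch of accuracy, sensitivity, and specificity for the binary unhealthy/healthy case, with illustrative counts (not the paper's data):

```python
def metrics(tp, fp, fn, tn):
    """Accuracy, sensitivity (recall) and specificity from the
    four cells of a binary confusion matrix."""
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    sensitivity = tp / (tp + fn)   # true-positive rate
    specificity = tn / (tn + fp)   # true-negative rate
    return accuracy, sensitivity, specificity

# Hypothetical counts: 45 unhealthy detected, 5 missed, 40 healthy kept, 10 flagged
acc, sens, spec = metrics(tp=45, fp=10, fn=5, tn=40)
print(acc, sens, spec)  # 0.85 0.9 0.8
```

With balanced classes, as in the snippet above, accuracy is a reasonable summary; with imbalanced classes, sensitivity and specificity tell a more honest story.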
Conference Paper
Machine learning (ML) can assist in the difficult work of extracting meaningful information from the seemingly useless data produced by IoT devices. The careful deployment of hybrid technologies has reaped benefits for a wide range of institutions, including businesses, governments, schools, and hospitals. The Internet of Things (IoT) may use machine learning to identify previously hidden patterns in large volumes of data in order to create accurate forecasts and recommendations. IoT and ML are being applied in the field of medicine to automate the creation of medical records, predict illness diagnoses, and, most importantly, continuously monitor patients. On different datasets, different machine learning algorithms achieve differing degrees of success, and the numerous predictions may or may not affect the eventual result. The degree to which the results differ from one another plays a crucial part in the therapeutic decision-making process. The healthcare industry relies significantly on a variety of ML algorithms to manage the data generated by IoT devices. In this paper, we discuss how popular machine learning techniques can be used in the field of medicine for classification and prediction purposes. The objective of this study is to provide evidence that utilizing a more sophisticated ML model for the analysis of IoT health data is beneficial. After a substantial amount of time spent on the matter, we came to the realization that a number of well-known ML prediction algorithms have significant weaknesses. The type of Internet of Things dataset being utilized determines which technique is most effective for anticipating vital health data. The paper demonstrates a number of the ways in which the Internet of Things and machine learning have affected the delivery of healthcare in a variety of settings.
Article
As a global cancer, gastric cancer is a severe threat to the health of people all over the world. In China, young people's gastric cancer is easily misdiagnosed, and its misdiagnosis rate can be as high as 27%. To improve the accuracy and efficiency of gastric cancer detection and the goodness of fit of the convolutional neural network, we propose a multidimensional convolutional lightweight network, named MCLNet, based on ShuffleNetV2. ShuffleNetV2 is a model with low computational complexity, low memory consumption, and high GPU parallelism. However, ShuffleNetV2 has too few convolutional layers and only two-dimensional convolution, so the extracted features are insufficient. Therefore, we consider the association between pixels of the same category in an image. To tap this association, we one-dimensionalize the image and introduce one-dimensional convolution. Since one-dimensional convolution can extract the association of words in a sentence, applying it to images can extract the association between image elements. Adding one-dimensional convolution expands the information exchange between channels and enriches the information, which compensates for the lack of global feature extraction in two-dimensional convolution. In addition, we compare the proposed MCLNet with the state-of-the-art (SOTA) methods and illustrate the best results of the proposed MCLNet model through experiments.
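The one-dimensional convolution MCLNet introduces operates on the flattened image; a hedged sketch of valid-mode 1-D cross-correlation (the operation CNN "convolution" layers actually compute), with made-up numbers:

```python
def conv1d(signal, kernel):
    """Valid-mode 1-D cross-correlation: slide the kernel along the
    signal and take the dot product at each position."""
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(len(signal) - k + 1)]

# A flattened pixel row and an edge-detecting kernel (hypothetical values)
print(conv1d([1, 2, 3, 4], [1, 0, -1]))  # [-2, -2]
```

On a flattened image, each output element mixes pixels that may be far apart in 2-D, which is the cross-element association the paper aims to exploit.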
Article
Medical image analysis for accurate diagnosis of disease has become a very challenging task. Due to improper diagnosis, required medical treatment may be skipped. Proper diagnosis is needed, as suspected lesions can be missed by the physician's eye. Hence, this problem can be better addressed by investigating similar case studies present in the healthcare database. In this context, this paper substantiates an assistive system that would help dermatologists accurately identify 23 different kinds of melanoma. For this, 2300 dermoscopic images were used to train the skin-melanoma similar image search system. The proposed system performs feature extraction by assigning dynamic weights to the low-level features based on the individual characteristics of the searched images. Optimal weights are obtained by the newly proposed optimized pair-wise comparison (OPWC) approach. The uniqueness of the proposed approach is that it provides dynamic weights for the features of the searched image instead of applying static weights. The proposed approach is supported by the analytic hierarchy process (AHP) and meta-heuristic optimization algorithms such as particle swarm optimization (PSO), JAYA, the genetic algorithm (GA), and gray wolf optimization (GWO). The proposed approach has been tested with images of 23 classes of melanoma and achieved significant precision and recall. Thus, this approach to skin melanoma image search can be used as an expert assistive system to help dermatologists/physicians accurately identify different types of melanomas.
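The AHP step used above to derive feature weights starts from a pairwise-comparison matrix; a minimal sketch of the normalized-column-average approximation to the AHP priority vector, with an illustrative two-feature matrix (not the paper's data):

```python
def ahp_weights(pairwise):
    """Approximate the AHP priority vector: normalize each column of the
    pairwise-comparison matrix, then average across each row."""
    n = len(pairwise)
    col_sums = [sum(pairwise[r][c] for r in range(n)) for c in range(n)]
    return [sum(pairwise[r][c] / col_sums[c] for c in range(n)) / n
            for r in range(n)]

# "Feature A is judged 3x as important as feature B"
print([round(w, 4) for w in ahp_weights([[1.0, 3.0], [1.0 / 3.0, 1.0]])])  # [0.75, 0.25]
```

Meta-heuristics like PSO or GA can then refine such weights against a retrieval-quality objective, which is roughly the role they play in the paper's pipeline.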
Article
Due to the wide range of diseases and imaging modalities, a retrieval system is a challenging task for accessing the corresponding clinical cases from a large medical repository on time. Several computer-aided diagnosis systems (CADx) have been developed to recognize medical imaging modalities (MIM) based on various standard machine learning (SML) and advanced deep learning (DL) algorithms. Pre-trained models like convolutional neural networks (CNN) have been used in the past as transfer learning (TL) architectures. However, it is challenging to use these pre-trained models on unseen datasets with a different feature domain. To classify different medical images, relevant features with a robust classifier are needed, and this remains an unsolved task due to MIM-based features. In this paper, a hybrid MIM-based classification system is developed by integrating the pre-trained VGG-19 and ResNet34 models into the original CNN model. Next, the MIM-DTL model is fine-tuned by updating the weights of the new layers as well as the weights of the original CNN layers. The performance of MIM-DTL is compared with state-of-the-art systems based on the cancer imaging archive (TCIA), Kvasir and lower extremity radiographs (LERA) datasets in terms of statistical measures such as accuracy (ACC), sensitivity (SE) and specificity (SP). On average, the MIM-DTL model achieved an ACC of 99%, SE of 97.5% and SP of 98%, with fewer epochs compared to other TL architectures. The experimental results show that the MIM-DTL model outperforms others in recognizing medical imaging modalities and helps healthcare experts identify relevant diseases.
Article
Full-text available
Due to rapid changes in human lifestyles, a set of biological factors of human lives has changed, making people more vulnerable to certain diseases such as stroke. Stroke is a life-threatening disease leading to long-term disability. It is now a leading cause of death all over the world, and the second leading cause of death after ischemic heart disease in Jordan. Stroke detection within the first few hours improves the chances of preventing complications and improving the health care and management of patients. In this study, we used patient information believed to be related to the cause of stroke and applied machine learning techniques such as Naive Bayes, Decision Tree, and KNN to predict stroke. Orange software is used to automatically process the data and generate a data mining model that can be used by health care professionals to predict stroke disease and give a better treatment plan. Results show that the decision tree classifier outperformed the other techniques with an accuracy of 94.2%.
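Decision-tree classifiers such as the one that performed best here choose splits by impurity reduction; a minimal sketch of the Gini criterion at the heart of that process, with illustrative labels (not the study's patient data):

```python
def gini(labels):
    """Gini impurity: 1 minus the sum of squared class proportions.
    0 means a pure node; 0.5 is the worst case for two classes."""
    n = len(labels)
    props = [labels.count(c) / n for c in set(labels)]
    return 1 - sum(p * p for p in props)

# A mixed node is maximally impure; a pure node scores 0
print(gini(["stroke", "stroke", "healthy", "healthy"]))  # 0.5
print(gini(["stroke", "stroke"]))  # 0.0
```

Tree induction greedily picks, at each node, the attribute split that most reduces the weighted impurity of the resulting child nodes.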
Chapter
Medical image analysis using deep learning algorithms (CNNs) is an area of foremost significance and high priority, and a great deal of work is still needed in it. Indeed, in the majority of cases, medical data are interpreted by human experts, while the analysis of medical data is a very demanding and complicated task. Often, separate analyses are performed by different human specialists, which can result in inaccurate identification of illness. The astounding performance of deep learning (DL) in various domains attracted researchers to apply this method within the medical domain, as DL provides excellent accuracy and precision in the final output. Hence, it has been envisioned as a core strategy for medical image analysis using deep learning algorithms (CNNs) and in different streams of the medical services sector. Further, segmentation is a basic, effective and core step of medical image analysis. Scientists are consistently endeavouring to increase the precision of medical image analysis. In the recent past, machine intelligence-based strategies have generally been utilised for this pursuit; the new trend in the domain of medical image analysis is the application of deep learning-based methodologies. The use of deep learning improves predictive accuracy and also mitigates the intervention of human specialists in the analysis. This paper aims to survey medical image analysis using the deep learning algorithm convolutional neural network (CNN). The use of deep learning algorithms for medical image analysis is very significant since it produces sufficiently reliable results compared to human effort, and it also reduces human work and time; the survey is carried out with all these aspects in mind.
In this paper, besides medical image analysis, convolutional neural network (CNN) architectural implementation and its features are also discussed. Keywords: Image analysis, Convolutional neural networks, Classification, Segmentation
Chapter
Full-text available
This study creates a new and simplified method for selecting a suitable site for building wind turbines, using standard power factor and power curves. The electrical energy generated from wind energy is influenced by the physical characteristics of the wind site and the parameters of the wind turbine; thus, matching the turbine with the site depends on determining the parameters of the optimum speed of the turbine, which is estimated from the performance index (PI) curve. This indicator is a new rating parameter, obtained from the highest value of the standard power and capacity curves. The relationship between the three indices is plotted against the rated wind speed for a specific value of the Weibull shape parameter of the location. Thus, a more skillful method, called the equivalent energy method (EEM), was used for Weibull parameter evaluation. Keywords: Weibull distribution function, Capacity factor, Normalized power, Performance index
Chapter
Full-text available
Actionable insights and learning from highly complex biomedical datasets are a key challenge in smart healthcare. Traditional data processing algorithms fail to provide better results with complex data. Recent advancements in artificial intelligence methods introduced end-to-end complex learning models called deep neural networks, often referred to as deep learning models. In this chapter, we review recent advancements of deep learning models and their applications related to healthcare. We also discuss the challenges and opportunities faced by deep learning models.
Chapter
At present, the analysis and diagnosis of a particular disease is a big challenge for doctors. To get prior information regarding the internal anatomical structure of human organs or tissue, different imaging modality techniques are used to capture the medical image, which is represented pixel by pixel. Due to the large volume of data in the image dataset, it is difficult to analyze. In this study, a sequence of operations on the medical image, such as pre-processing, feature extraction, feature selection, and existing classification techniques with their pros and cons, is studied and compared. Finally, improvements to classification techniques in terms of efficiency and accuracy are summarized, which will be helpful for researchers working in this field. Keywords: Medical image, Image classification, Feature extraction, Disease diagnosis, Artificial intelligence
Chapter
Full-text available
The purpose of this chapter is to discuss the role of Artificial Intelligence (AI) in the medical platform, especially in creating devices that perform the same way a human organ does. We discuss AI-driven healthcare devices that use machine learning, deep learning, and natural language processing to analyze clinical data in the form of text and images. Different platforms, such as BioXcel Therapeutics, BERG, XtalPi's ID4, Deep Genomics, IBM's Watson, and Google's DeepMind Health, and their techniques reduce the mortality rate in complicated medical diagnoses and surgeries. Finally, the challenges and risks of an automated clinical system are explained.
Thesis
Microscopy recordings of complete organisms and their development enable the study of whole organisms or systems and generate datasets in the terabyte range. Such large datasets require the development of computer-vision tools to perform tasks such as detection, segmentation, classification, and registration. It is desirable to develop computer-vision tools that require only a minimal amount of manually annotated training data. I demonstrate such applications in three projects. First, I present a tool for the automatic registration of Drosophila wings (of different species) using landmark detection, which is used to study how enhancers work. I compare the performance of a shape-model approach with that of a small neural network when only 20 training examples are available. Both methods perform well and enable the precise registration of thousands of wings. The second project is a high-resolution cell-nucleus model of C. elegans, built from a nanometer-resolution electron microscopy dataset of an entire dauer larva. This work is the first atlas of the C. elegans dauer diapause ever created and reveals the number of cell nuclei at this stage. Finally, I present an image-analysis pipeline that I worked on together with Laura Breimann and others. The pipeline covers spot detection for single-molecule fluorescence in situ hybridization (smFISH), object segmentation, and embryo-stage prediction. With these three examples, I demonstrate both generic approaches to the computational modelling of model organisms and tailor-made solutions for specific problems, as well as the field's shift towards deep learning.
Chapter
Cancer is a disease caused by the uncontrolled division of cells, other than normal body cells, in any part of the body. It is one of the most dreadful diseases affecting the whole world, which creates demand for new and advanced diagnostic techniques. Medical imaging has been digitalized and has benefited the world not only by reducing diagnostic time, labor and cost but also by making accurate early detection possible. This field is revolutionized by innovations and advancements in medical devices, standardized protocols and detailed image analysis. A detailed assessment of malignancy, to verify the composition of cancerous cells, is now possible with the adoption of digital analysis. The in-depth analysis of images reveals the initiation, propagation and transition of cancer from benign to malignant. The use of Artificial Intelligence can also prove beneficial, as it can assess the information for evidence of opacities that will ultimately detect cancerous cells and make the treatment quick and accurate. Traditional diagnostic approaches are not reliable for detecting remotely located cancers, which is now possible using advanced imaging techniques. This book chapter focuses on a deep insight into traditional cancer studies and modern practices in the medical image analysis of cancer.
Article
Full-text available
Osteoarthritis (OA) is the most common form of arthritis and joint disease, and one of the fundamental causes of sickness in older and overweight individuals. It is the main cause of disability in adults. The disease mostly occurs in people above 45 years of age, with women suffering more than men. It essentially damages the cartilage, because of which bones rub against each other, causing intense pain and inflammation; the cartilage thickens and forms spurs at the edges. Knee osteoarthritis is divided into four grades according to X-ray findings. Grades 1 through 3 can be managed with therapy and medications, while grade 4 requires knee replacement. The emerging osteoarthritis management approach involves clinical evaluation and diagnostic imaging techniques. In this research, we explore descriptively and objectively the various medical imaging methods used to diagnose and identify knee osteoarthritis. We study the automatic detection of the disease's recovery rate and the classification of knee osteoarthritis from medical images (such as magnetic resonance images, CT scans, and X-rays) using various medical image classification procedures. This paper provides a study that focuses on the various medical imaging methods used to determine osteoarthritis.
Article
Full-text available
SPECT nuclear medicine imaging is widely used for treating, diagnosing, evaluating and preventing various serious diseases. The automated classification of medical images is becoming increasingly important in developing computer-aided diagnosis systems. Deep learning, particularly convolutional neural networks, has been widely applied to the classification of medical images. In order to reliably classify SPECT bone images for the automated diagnosis of metastasis, on which SPECT imaging solely focuses, in this paper we present several deep classifiers based on deep networks. Specifically, original SPECT images are cropped to extract the thoracic region, followed by a geometric transformation that helps augment the original data. We then construct deep classifiers based on the widely used deep networks VGG, ResNet and DenseNet, by fine-tuning their parameters and structures or by defining new network structures. Experiments on a set of real-world SPECT bone images show that the proposed classifiers perform well in identifying bone metastasis with SPECT imaging, achieving 0.9807, 0.9900, 0.9830, 0.9890, 0.9802 and 0.9933 for accuracy, precision, recall, specificity, F-1 score and AUC, respectively, on test samples from the augmented dataset without normalization.
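The scalar metrics reported above (accuracy, precision, recall, specificity, F-1 score) all follow from a binary confusion matrix. A minimal sketch in plain Python, not taken from the paper; AUC is omitted because it requires ranked prediction scores rather than hard labels:

```python
def binary_metrics(y_true, y_pred):
    """Accuracy, precision, recall, specificity and F1 from binary
    labels (1 = metastasis present, 0 = normal)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0          # sensitivity
    specificity = tn / (tn + fp) if tn + fp else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return accuracy, precision, recall, specificity, f1
```

Reporting all five together, as the paper does, guards against a classifier that scores well on accuracy alone by over-predicting the majority class.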
Article
This paper describes a new hybrid approach, based on modular artificial neural networks with fuzzy logic integration, for the diagnosis of pulmonary diseases such as pneumonia and lung nodules. In particular, the proposed approach analyzes medical images, which are digitized chest X-rays, focusing on a classification method based on descriptors, such as grayscale histogram features, gray-level co-occurrence matrix (GLCM) texture-based features, and local binary pattern texture features. Then, to perform feature reduction, a multi-objective genetic algorithm is used to obtain an optimized neuro-fuzzy classifier, which is able to classify the pathology found in the analyzed chest X-ray. The main contribution of this paper is the proposed modular neural network approach, which divides features to achieve specialized analysis in the modules for digital image analysis and classification. The proposed approach achieves high classification accuracy after evaluating the neuro-fuzzy model with three large datasets of chest X-rays.
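The GLCM texture descriptors mentioned above start from a co-occurrence count of gray-level pairs. A toy sketch in plain Python (hypothetical helper, horizontal offset of one pixel only; real GLCM implementations support arbitrary offsets, angles and normalization):

```python
def glcm(image, levels):
    """Gray-level co-occurrence matrix for a horizontal offset of 1:
    counts how often gray level i appears immediately left of level j."""
    m = [[0] * levels for _ in range(levels)]
    for row in image:
        for a, b in zip(row, row[1:]):
            m[a][b] += 1
    return m
```

Texture features such as contrast, energy or homogeneity are then computed from the normalized version of this matrix.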
Article
Full-text available
Automated diagnosis of skin cancer is an important area of research for which different automated learning methods have been proposed so far. However, models based on insufficient labeled training data can badly influence the diagnosis results if the model has no advising and semi-supervising capability to add unlabeled data to the training set and obtain sufficient information. This paper proposes a semi-advised support vector machine based classification algorithm that can be trained using labeled data together with abundant unlabeled data. An adaptive differential evolution based algorithm is used for feature selection. For experimental analysis, two types of skin cancer datasets are used: one based on digital dermoscopic images and the other based on histopathological images. The proposed model provided quite convincing results on both datasets when compared with the respective state-of-the-art methods used for the feature selection and classification phases.
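The idea of learning from labeled plus abundant unlabeled data can be illustrated with generic self-training, which is not the paper's semi-advised SVM: a base model is fit on the labeled set, pseudo-labels the unlabeled points, and is refit on the enlarged set. The nearest-centroid base classifier and 1-D features here are deliberately toy choices:

```python
def nearest_centroid_fit(X, y):
    """Class centroids for 1-D features (toy base classifier)."""
    cents = {}
    for c in set(y):
        pts = [x for x, yi in zip(X, y) if yi == c]
        cents[c] = sum(pts) / len(pts)
    return cents

def self_train(X_lab, y_lab, X_unlab, rounds=3):
    """Generic self-training: repeatedly pseudo-label the unlabeled
    points with the current model and refit on the enlarged set."""
    X, y = list(X_lab), list(y_lab)
    for _ in range(rounds):
        cents = nearest_centroid_fit(X, y)
        pseudo = [min(cents, key=lambda c: abs(x - cents[c]))
                  for x in X_unlab]
        X, y = list(X_lab) + list(X_unlab), list(y_lab) + pseudo
    return nearest_centroid_fit(X, y)
```

The unlabeled points pull the class centroids toward the true data distribution, which is the benefit the abstract attributes to its semi-supervised setting.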
Conference Paper
Full-text available
Pediatric cardiomyopathies are a rare, yet heterogeneous group of pathologies of the myocardium that are routinely examined clinically using Cardiovascular Magnetic Resonance Imaging (cMRI). This gold-standard, powerful non-invasive tool yields high-resolution temporal images that characterize myocardial tissue. The complexities associated with the annotation of images and extraction of markers necessitate the development of efficient workflows to acquire, manage and transform this data into actionable knowledge for patient care to reduce mortality and morbidity. We develop and test a novel informatics framework called cMRI-BED for biomarker extraction and discovery from such complex pediatric cMRI data that includes the use of a suite of tools for image processing, marker extraction and predictive modeling. We applied our workflow to obtain and analyze a dataset of 83 de-identified cases and controls containing cMRI-derived biomarkers for classifying positive versus negative findings of cardiomyopathy in children. Bayesian rule learning (BRL) methods were applied to derive understandable models in the form of propositional rules with posterior probabilities pertaining to their validity. Popular machine learning methods in the WEKA data mining toolkit were applied using default parameters to assess cross-validation performance on this dataset using accuracy and percentage area under ROC curve (AUC) measures. The best 10-fold cross-validation predictive performance obtained on this cMRI-derived biomarker dataset was 80.72% accuracy and 79.6% AUC by a BRL decision tree model, which is promising for this type of rare data. Moreover, we were able to verify that myocardial delayed enhancement (MDE) status, which is known to be an important qualitative factor in the classification of cardiomyopathies, is picked up by our rule models as an important variable for prediction.
Preliminary results show the feasibility of our framework for processing such data while also yielding actionable predictive classification rules that can augment knowledge conveyed in cardiac radiology outcome reports. Interactions between MDE status and other cMRI parameters that are depicted in our rules warrant further investigation and validation. Predictive rules learned from cMRI data to classify positive and negative findings of cardiomyopathy can enhance scientific understanding of the underlying interactions among imaging-derived parameters.
Article
Full-text available
Prostate cancer is the most common form of cancer and the second leading cause of cancer death in North America. Auto-detection of prostate cancer can play a major role in early detection of prostate cancer, which has a significant impact on patient survival rates. While multi-parametric magnetic resonance imaging (MP-MRI) has shown promise in the diagnosis of prostate cancer, the existing auto-detection algorithms do not take advantage of the abundance of data available in MP-MRI to improve detection accuracy. The goal of this research was to design a radiomics-based auto-detection method for prostate cancer by utilizing MP-MRI data. In this work, we present new MP-MRI texture feature models for radiomics-driven detection of prostate cancer. In addition to commonly used non-invasive imaging sequences in conventional MP-MRI, namely T2-weighted MRI (T2w) and diffusion-weighted imaging (DWI), our proposed MP-MRI texture feature models incorporate computed high-b DWI (CHB-DWI) and a new diffusion imaging modality called correlated diffusion imaging (CDI). Moreover, the proposed texture feature models incorporate features from individual b-value images. A comprehensive set of texture features was calculated for both the conventional MP-MRI and new MP-MRI texture feature models. We performed feature selection analysis for each individual modality and then combined the best features from each modality to construct the optimized texture feature models. The performance of the proposed MP-MRI texture feature models was evaluated via leave-one-patient-out cross-validation using a support vector machine (SVM) classifier trained on 40,975 cancerous and healthy tissue samples obtained from real clinical MP-MRI datasets. The proposed MP-MRI texture feature models outperformed the conventional model (i.e., T2w+DWI) with regard to cancer detection accuracy. Comprehensive texture feature models were developed for improved radiomics-driven detection of prostate cancer using MP-MRI.
Using a comprehensive set of texture features and a feature selection method, optimal texture feature models were constructed that improved the prostate cancer auto-detection significantly compared to conventional MP-MRI texture feature models.
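Leave-one-patient-out cross-validation, as used above, holds out every sample from one patient at a time so that a patient's tissue samples never appear in both the training and test folds. A minimal split generator in plain Python (the function name and inputs are illustrative, not from the paper):

```python
def leave_one_patient_out(patient_ids):
    """Yield (train_idx, test_idx) splits where all samples from one
    patient are held out together, as in leave-one-patient-out CV."""
    for p in sorted(set(patient_ids)):
        test = [i for i, q in enumerate(patient_ids) if q == p]
        train = [i for i, q in enumerate(patient_ids) if q != p]
        yield train, test
```

Grouping by patient rather than by sample avoids the optimistic bias that would arise from correlated samples of the same patient leaking across folds.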
Article
Full-text available
Background: Brain segmentation in magnetic resonance images (MRI) is an important stage in clinical studies for different issues such as diagnosis, analysis, and 3-D visualizations for treatment and surgical planning. MR image segmentation remains a challenging problem in spite of existing artifacts such as noise, bias field, partial volume effects, and the complexity of the images. Some of the automatic brain segmentation techniques are complex and some of them are not sufficiently accurate for certain applications. The goal of this paper is to propose an algorithm that is more accurate and less complex. Methods: In this paper we present a simple and more accurate automated technique for brain segmentation into White Matter, Gray Matter and Cerebrospinal Fluid (CSF) in three-dimensional MR images. The algorithm's three steps are histogram-based segmentation, feature extraction and final classification using SVM. The integrated algorithm gives more accurate results than can be obtained with its individual components. To produce a much more efficient segmentation method, our framework captures different types of features in each step that are of special importance for MRI, i.e., distributions of tissue intensities, textural features, and relationships with neighboring voxels or spatial features. Results: Our method has been validated on real images and simulated data, with desirable performance in the presence of noise and intensity inhomogeneities. Conclusions: The experimental results demonstrate that our proposed method is a simple and accurate technique to define brain tissues with high reproducibility in comparison with other techniques. Virtual Slides: The virtual slide(s) for this article can be found here: http://www.diagnosticpathology.diagnomx.eu/vs/13000_2014_207.
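The histogram-based first step can be illustrated with Otsu thresholding, a standard histogram-splitting method chosen here for illustration; the paper's exact histogram procedure may differ. This sketch picks the intensity threshold that maximizes the between-class variance:

```python
def otsu_threshold(pixels, levels=256):
    """Otsu's method: choose the threshold t maximizing the
    between-class variance of the intensity histogram."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    total_sum = sum(i * h for i, h in enumerate(hist))
    best_t, best_var = 0, -1.0
    w0 = sum0 = 0                      # weight and sum of class 0
    for t in range(levels):
        w0 += hist[t]
        if w0 == 0 or w0 == total:
            continue
        sum0 += t * hist[t]
        w1 = total - w0
        mu0, mu1 = sum0 / w0, (total_sum - sum0) / w1
        var = w0 * w1 * (mu0 - mu1) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t
```

Applied per tissue class, such thresholds give the coarse labels that the later feature-extraction and SVM steps refine.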
Article
Full-text available
Functional magnetic resonance imaging (fMRI) analysis is commonly done with cross-correlation analysis (CCA) and the General Linear Model (GLM). Both CCA and GLM techniques, however, typically perform calculations on a per-voxel basis and do not consider relationships that neighboring voxels may have. Clustered voxel analyses have therefore been developed to improve fMRI signal detection by taking advantage of the relationships of neighboring voxels. Mean-shift clustering (MSC) is another technique that takes into account properties of neighboring voxels and can be considered for enhancing fMRI activation detection. This study examines the adoption of MSC for fMRI analysis. MSC was applied to a Statistical Parameter Image generated with the CCA technique on both simulated and real fMRI data. The MSC technique was then compared with CCA and with CCA plus cluster analysis. A range of kernel sizes was used to examine how the technique behaves. Receiver Operating Characteristic curves show an improvement over CCA and cluster analysis. False positive rates are lower with the proposed technique. MSC allows the use of a low intensity threshold and does not require a cluster size threshold, which improves detection of weak activations and highly focused activations. The proposed technique shows improved activation detection for both simulated and real Blood Oxygen Level Dependent fMRI data. More detailed studies are required to further develop the proposed technique.
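The mean-shift procedure underlying MSC can be sketched in one dimension: each point iteratively moves to the mean of its neighbors within a kernel bandwidth, so points converge to local density modes. This toy version uses a flat kernel on scalar values, whereas the study operates on voxels of a statistical parameter image:

```python
def mean_shift_1d(points, bandwidth, iters=50):
    """1-D mean shift with a flat kernel: each point moves to the mean
    of all points within `bandwidth`, converging to density modes."""
    modes = list(points)
    for _ in range(iters):
        modes = [
            sum(p for p in points if abs(p - m) <= bandwidth)
            / sum(1 for p in points if abs(p - m) <= bandwidth)
            for m in modes
        ]
    return modes
```

Points that converge to the same mode form one cluster; the bandwidth plays the role of the kernel size varied in the study, and no cluster-size threshold is needed.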