Article

Automatic myocardial segmentation in dynamic contrast enhanced perfusion MRI using Monte Carlo dropout in an encoder-decoder convolutional neural network


Abstract

Background and objective: Cardiac perfusion magnetic resonance imaging (MRI) with first pass dynamic contrast enhancement (DCE) is a useful tool to identify perfusion defects in myocardial tissues. Automatic segmentation of the myocardium can lead to efficient quantification of perfusion defects. The purpose of this study was to investigate the usefulness of uncertainty estimation in deep convolutional neural networks for automatic myocardial segmentation. Methods: A U-Net segmentation model was trained on cardiac cine data. Monte Carlo dropout sampling of the U-Net model was performed on the dynamic perfusion datasets frame-by-frame to estimate the standard deviation (SD) maps. The uncertainty estimate based on the sum of the SD values was used to select the optimal frames for endocardial and epicardial segmentations. DCE perfusion data from 35 subjects (14 subjects with coronary artery disease, 8 subjects with hypertrophic cardiomyopathy, and 13 healthy volunteers) were evaluated. The Dice similarity scores of the proposed method were compared with those of a semi-automatic U-Net segmentation method, which involved user selection of an image frame for segmentation in the cardiac perfusion dataset. Results: The proposed method was fully automatic and did not require manual labeling of the cardiac perfusion image data for model development. The mean Dice similarity score of the proposed automatic method was 0.806 (±0.096), which was comparable to the 0.808 (±0.084) Dice score of the semi-automatic U-Net segmentation method (intraclass correlation coefficient = 0.61, P < 0.001). Conclusions: Our study demonstrated the feasibility of applying an existing model trained on cardiac cine data to dynamic cardiac perfusion data to achieve robust and automatic segmentation of the myocardium. The uncertainty estimates can be used for screening, allowing cases with high endocardial and epicardial uncertainty estimates to be sent for further evaluation and correction by human experts.
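The frame-selection idea summarized in the abstract (repeated Monte Carlo dropout passes, a pixel-wise SD map per frame, and the summed SD as a per-frame uncertainty score) can be illustrated with a short sketch. This is a rough illustration of the general idea, not the authors' implementation; it assumes a PyTorch segmentation network `model` that contains dropout layers and a tensor `frames` holding the perfusion time series, and all helper names are hypothetical.

```python
import torch
import torch.nn as nn

def enable_mc_dropout(model: nn.Module) -> None:
    """Keep dropout layers active at inference while the rest of the model stays in eval mode."""
    model.eval()
    for m in model.modules():
        if isinstance(m, (nn.Dropout, nn.Dropout2d)):
            m.train()

@torch.no_grad()
def frame_uncertainty(model: nn.Module, frames: torch.Tensor, n_samples: int = 20):
    """Return one score per frame: the sum of the pixel-wise SD over MC dropout samples."""
    enable_mc_dropout(model)
    scores = []
    for frame in frames:                                   # frame: (1, H, W)
        samples = torch.stack(
            [torch.sigmoid(model(frame.unsqueeze(0))) for _ in range(n_samples)]
        )                                                  # (n_samples, 1, C, H, W)
        sd_map = samples.std(dim=0)                        # pixel-wise SD map
        scores.append(sd_map.sum().item())                 # summed SD = frame uncertainty
    return scores

# The frame with the lowest summed SD would then be selected for segmentation:
# scores = frame_uncertainty(model, frames)
# best_frame_idx = min(range(len(scores)), key=scores.__getitem__)
```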


... Results from uncertainty and error maps can be represented by a single score that can be used for failure detection in cases where it has a strong linear relationship with Dice values. This score can comprise measures like the intersection over union overlap [14], quality metric [12], [13], mean voxel-wise uncertainty [7], [14] and the pixel-wise sum [10], [11]. ...
... Similar to what is reported in [10], [11], we found a significant correlation between the VS values and Dice coefficients. We also saw an improved mean Dice value after we removed failed segmentation maps based on VS values. ...
... To evaluate the performance of QC approaches such as aggregation and Dice prediction, we used statistical metrics. As in [10], [11], we used a voxel-wise sum (VS) measure to aggregate uncertainty and error maps. As demonstrated in [4], [45], small isolated WMH clusters are more difficult to estimate correctly. ...
Preprint
Machine learning algorithms underpin modern diagnostic-aiding software, which has proved valuable in clinical practice, particularly in radiology. However, inaccuracies, mainly due to the limited availability of clinical samples for training these algorithms, hamper their wider applicability, acceptance, and recognition amongst clinicians. We present an analysis of state-of-the-art automatic quality control (QC) approaches that can be implemented within these algorithms to estimate the certainty of their outputs. We validated the most promising approaches on a brain image segmentation task identifying white matter hyperintensities (WMH) in magnetic resonance imaging data. WMH are a correlate of small vessel disease common in mid-to-late adulthood and are particularly challenging to segment due to their varied sizes and distributional patterns. Our results show that the aggregation of uncertainty and Dice prediction were most effective in failure detection for this task. Both methods independently improved mean Dice from 0.82 to 0.84. Our work reveals how QC methods can help to detect failed segmentation cases and therefore make automatic segmentation more reliable and suitable for clinical practice.
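A minimal sketch of the voxel-wise sum (VS) aggregation and its use for failure detection, as described in the excerpts above: the uncertainty (or error) map is collapsed to a single score, its linear relationship with Dice is checked on validation cases, and cases whose score exceeds a chosen threshold are flagged for review. Function and variable names are assumptions, not the cited implementation.

```python
import numpy as np
from scipy.stats import pearsonr

def voxelwise_sum(uncertainty_map: np.ndarray) -> float:
    """Aggregate a per-voxel uncertainty (or error) map into a single score."""
    return float(uncertainty_map.sum())

def flag_failures(uncertainty_maps, dice_scores, vs_threshold):
    """Report the VS-Dice correlation and flag cases whose VS exceeds a threshold."""
    vs = np.array([voxelwise_sum(u) for u in uncertainty_maps])
    r, p = pearsonr(vs, np.asarray(dice_scores))      # strength of the linear relationship
    flagged = np.where(vs > vs_threshold)[0]          # indices of likely failed segmentations
    return r, p, flagged
```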
... An additional useful property of CNNs is that they lend themselves well to transfer learning, where the majority of the network is kept with its high-level feature extraction ability and only the last output layer is exchanged for a new layer to fit the purpose of the study [51]. As a result, the majority of deep learning studies on perfusion imaging in the last few years have used CNNs as the main architecture, as shown in Fig. 3. Furthermore, the power and flexibility of CNNs have opened the window for deep learning applications in more challenging image analysis domains such as stress perfusion CMR [23,27,30,31], resting CT perfusion (rCTP), and myocardial contrast echocardiography (MCE) [22]. ...
... There are some promising data on the effectiveness of applying deep learning with CNNs to the pre-processing stage of perfusion quantification in CMR by automated identification of anatomical landmarks, such as the right ventricle (RV) insertion point into the septum and the left ventricle (LV) centre at peak contrast enhancement [23,31,43]. Furthermore, CNN algorithms have been successfully applied to the segmentation of CMR perfusion images [27,30] with high performance. These applications in CMR still require further research. ...
Article
Full-text available
Background Coronary artery disease (CAD) is a leading cause of death worldwide, and the diagnostic process comprises invasive testing with coronary angiography and non-invasive imaging, in addition to history, clinical examination, and electrocardiography (ECG). A highly accurate assessment of CAD lies in perfusion imaging, which is performed by myocardial perfusion scintigraphy (MPS) and magnetic resonance imaging (stress CMR). Recently, deep learning has been increasingly applied to perfusion imaging for better understanding of the diagnosis, safety, and outcome of CAD. The aim of this review is to summarise the evidence behind deep learning applications in myocardial perfusion imaging. Methods A systematic search was performed on MEDLINE and EMBASE databases, from database inception until September 29, 2020. This included all clinical studies focusing on deep learning applications and myocardial perfusion imaging, and excluded competition conference papers, simulation and animal studies, and studies which used perfusion imaging as a variable with a different focus. This was followed by review of abstracts and full texts. A meta-analysis was performed on a subgroup of studies which looked at perfusion image classification. A summary receiver-operating curve (SROC) was used to compare the performance of different models, and area under the curve (AUC) was reported. Effect size, risk of bias and heterogeneity were tested. Results 46 studies in total were identified; the majority were MPS studies (76%). The most common neural network was the convolutional neural network (CNN) (41%). 13 studies (28%) looked at perfusion imaging classification using MPS; the pooled diagnostic accuracy showed an AUC of 0.859. The summary receiver operating curve (SROC) comparison showed superior performance of CNN (AUC = 0.894) compared to MLP (AUC = 0.848). The funnel plot was asymmetrical, and the effect size was significantly different with p value < 0.001, indicating a small-study effect and possible publication bias. There was no significant heterogeneity amongst studies according to the Q test (p = 0.2184). Conclusion Deep learning has shown promise to improve myocardial perfusion imaging diagnostic accuracy, prediction of patients’ events and safety. More research is required in clinical applications to achieve better care for patients with known or suspected CAD.
... The best frames for epicardial and endocardial segmentations were chosen using an uncertainty estimate calculated from the sum of the SD values. Their results [52] show that robust and automated myocardial segmentation can be accomplished by applying a model trained on cardiac cine imaging data to dynamic myocardial perfusion data. Subjects with a higher degree of ambiguity in their endocardial and epicardial segmentations might be forwarded for additional review and correction by medical specialists, using the uncertainty estimates as a screening tool. ...
Thesis
Full-text available
Deep learning technologies developed at an exponential rate throughout the years. Starting from Convolutional Neural Networks (CNNs) to Involutional Neural Networks (INNs), there are several neural network (NN) architectures today, including Vision Transformers (ViT), Graph Neural Networks (GNNs), Recurrent Neural Networks (RNNs), etc. However, uncertainty cannot be represented in these architectures, which poses a significant difficulty for decision-making, given that capturing the uncertainties of these state-of-the-art NN structures would aid in making specific judgments. Dropout is one method that may be implemented within Deep Learning (DL) networks as a technique to assess uncertainty. Dropout is applied at the inference phase to measure the uncertainty of these neural network models. This approach, commonly known as Monte Carlo Dropout (MCD), works well as a low-complexity estimation to compute uncertainty. MCD is a widely used approach to measure uncertainty in DL models, but the majority of the earlier works focus on only a particular application. Furthermore, there are many state-of-the-art (SOTA) NNs that remain unexplored with regard to uncertainty evaluation. Therefore an up-to-date roadmap and benchmark is required in this field of study. Our study revolved around a comprehensive analysis of the MCD approach for assessing model uncertainty in neural network models with a variety of datasets. Besides, we include SOTA NNs to explore the untouched models regarding uncertainty. In addition, we demonstrate how the model may perform better with less uncertainty by modifying NN topologies, which also reveals the causes of a model’s uncertainty. Using the results of our experiments and subsequent enhancements, we also discuss the various advantages and costs of using MCD in these NN designs. While working with reliable and robust models we propose two novel architectures, which provide outstanding performances in medical image diagnosis.
... Based on Lead II ECG signals, the Gabor-filter DCNN and DCNN models attained average accuracy rates of 99.55% and 98.74%, respectively, for the four-class classification task. Kim et al (2020) utilized a U-Net architecture combined with the Monte Carlo dropout technique to estimate the uncertainty of the U-Net model on cardiac perfusion images for myocardial segmentation. Their approach obtained a Dice similarity score of 0.806 ± 0.096. ...
Article
Full-text available
Myocardial infarction (MI) results in heart muscle injury due to receiving insufficient blood flow. MI is the most common cause of mortality in middle-aged and elderly individuals worldwide. To diagnose MI, clinicians need to interpret electrocardiography (ECG) signals, which requires expertise and is subject to observer bias. Artificial intelligence-based methods can be utilized to screen for or diagnose MI automatically using ECG signals. In this work, we conducted a comprehensive assessment of artificial intelligence-based approaches for MI detection based on ECG and some other biophysical signals, including machine learning (ML) and deep learning (DL) models. The performance of traditional ML methods relies on handcrafted features and manual selection of ECG signals, whereas DL models can automate these tasks. The review observed that deep convolutional neural networks (DCNNs) yielded excellent classification performance for MI diagnosis, which explains why they have become prevalent in recent years. To our knowledge, this is the first comprehensive survey of artificial intelligence techniques employed for MI diagnosis using ECG and some other biophysical signals.
... Fig. 13 depicts the experiment's segmentation findings. Kim et al. [31] proposed a U-Net-based segmentation model. The method performs frame-by-frame Monte Carlo dropout sampling of the U-Net model on a dynamic perfusion dataset and estimates uncertainty from the sum of the standard deviation (SD) values to select the best frame. ...
Article
Background: Due to the advancement of medical imaging and computer technology, machine intelligence for analyzing clinical image data increases the probability of disease prevention and successful treatment. When diagnosing and detecting heart disease, medical imaging can provide high-resolution scans of every organ or tissue in the heart. The diagnostic results obtained by the imaging method are less susceptible to human interference. They can process large amounts of patient information, assist doctors in the early detection, intervention, and treatment of heart disease, and improve the understanding of heart disease symptoms, which is of great significance for clinical diagnosis. In a computer-aided diagnosis system, accurate segmentation of cardiac scan images is the basis and premise of subsequent thoracic function analysis and 3D image reconstruction. Existing techniques: This paper systematically reviews automatic methods, and some of their difficulties, for cardiac segmentation in radiographic images. Combined with recent advanced deep learning techniques, the feasibility of using deep learning network models for image segmentation is discussed, and the commonly used deep learning frameworks are compared. Developed insights: There are many standard methods for medical image segmentation, such as traditional methods based on regions and edges and methods based on deep learning. Because medical images exhibit non-uniform grayscale, individual differences, artifacts, and noise, the above image segmentation methods have certain limitations, and it is difficult to obtain the required sensitivity and accuracy when performing heart segmentation. The deep learning models proposed have achieved good results in image segmentation. Accurate segmentation improves the accuracy of disease diagnosis and reduces subsequent irrelevant computations. Summary: There are two requirements for accurate segmentation of radiological images. One is to use image segmentation to improve the development of computer-aided diagnosis. The other is to achieve complete segmentation of the heart. When there are lesions or deformities in the heart, there will be some abnormalities in the radiographic images, and the segmentation algorithm needs to segment the heart in its entirety. With the advancement of deep learning and improvements in hardware performance, the amount of computation within a certain range will no longer be a restriction for real-time detection.
... Notable innovations were made in specific manuscripts. One recent paper incorporated Monte Carlo dropout in conjunction with a U-Net to provide uncertainty estimates for the segmentation [37]. This approach may help obviate unpredictable failures and provide clinicians with some degree of understanding of the expected quality of images. ...
Article
Full-text available
Purpose of Review Anatomical segmentation has played a major role within clinical cardiology. Novel techniques through artificial intelligence-based computer vision have revolutionized this process through both automation and novel applications. This review discusses the history and clinical context of cardiac segmentation to provide a framework for a survey of recent manuscripts in artificial intelligence and cardiac segmentation. We aim to clarify for the reader the clinical question of “Why do we segment?” in order to understand the question of “Where is current research and where should it be?” Recent Findings There has been increasing research in cardiac segmentation in recent years. Segmentation models are most frequently based on a U-Net structure. Multiple innovations have been added in terms of pre-processing or connection to analysis pipelines. Cardiac MRI is the most frequently segmented modality, which is due in part to the presence of publicly available, moderately sized, computer vision competition datasets. Further progress in data availability, model explanation, and clinical integration is being pursued. Summary The task of cardiac anatomical segmentation has experienced massive strides forward within the past 5 years due to convolutional neural networks. These advances provide a basis for streamlining image analysis, and a foundation for further analysis both by computer and human systems. While technical advances are clear, clinical benefit remains nascent. Novel approaches may improve measurement precision by decreasing inter-reader variability and appear to also have the potential for larger-reaching effects in the future within integrated analysis pipelines.
... In contrast, an accuracy of 99.92% is obtained with the same method for the intra-patient scheme. Kim et al. [66] utilized DCNN models such as U-Net, including a semi-automatic U-Net, an automatic U-Net (AU-Net), and an automated encoder-decoder U-Net with Monte Carlo dropout sampling to estimate the uncertainty of the U-Net model, in a fully automatic manner, on a cardiac perfusion image dataset for myocardial segmentation. The mean Dice similarity score of the proposed uncertainty-based AU-Net method was 0.806 (± 0.096), comparable to the semi-automatic U-Net (0.808 ± 0.084) and better than the automatic U-Net (0.729 ± 0.147). ...
Preprint
Full-text available
Myocardial infarction (MI) results in heart muscle injury due to receiving insufficient blood flow. MI is the most common cause of mortality in middle-aged and elderly individuals around the world. To diagnose MI, clinicians need to interpret electrocardiography (ECG) signals, which requires expertise and is subject to observer bias. Artificial intelligence-based methods can be utilized to screen for or diagnose MI automatically using ECG signals. In this work, we conducted a comprehensive assessment of artificial intelligence-based approaches for MI detection based on ECG as well as other biophysical signals, including machine learning (ML) and deep learning (DL) models. The performance of traditional ML methods relies on handcrafted features and manual selection of ECG signals, whereas DL models can automate these tasks. The review observed that deep convolutional neural networks (DCNNs) yielded excellent classification performance for MI diagnosis, which explains why they have become prevalent in recent years. To our knowledge, this is the first comprehensive survey of artificial intelligence techniques employed for MI diagnosis using ECG and other biophysical signals.
... However, these methods simply enhance the ability to perceive global context information and do not implement the extraction of complex pixel features or the reuse of pixel features. The latest segmentation frameworks are mainly based on the encoder-decoder architecture and have been successfully used in many computer vision tasks, including human pose estimation [36], target detection [37,38], image style transfer [39], high-resolution and super-resolution imaging [40,41], and so on. Most methods attempt to combine features from adjacent stages to enhance low-level features, regardless of their different representations and global context information. ...
Article
Full-text available
Medical image segmentation plays an important role in many areas of clinical medicine, such as medical diagnosis and computer-assisted treatment. However, due to large quality differences, variable lesion areas, and their complex shapes, medical image segmentation is a very challenging task. Moreover, most recent deep learning methods ignore the global context information as well as the receptive fields of pixels and do not consider the reuse of pixel features during the feature extraction stage. In this paper, we propose DGFAU-Net, an encoder–decoder structured 2D segmentation model, to overcome the shortcomings mentioned above. In the encoder, DenseNet and AtrousCNN networks are leveraged to extract image features. The DenseNet network is mainly used to achieve the reuse of pixel features, and AtrousCNN is utilized to enhance the receptive field of pixels. In the decoder, two modules, global feature attention upsample (GFAU) and pyramid pooling attention squeeze-excitation (PPASE), are proposed. GFAU combines low-level and high-level features to generate new features with richer information, improving the pixels' perception of global contextual information. PPASE employs a multi-scale pooling layer to enhance the receptive field of pixels. In addition, Focal loss is used to balance the difference between samples from lesion and non-lesion areas. Extensive experiments are conducted on one local dataset and two public datasets, namely a local dataset of MRI images of carotid plaque, the DRIVE vessel segmentation dataset, and the CHASE_DB1 vessel segmentation dataset, and the experimental results demonstrate the effectiveness of our proposed model.
... The trained ANNs with dropout can make multiple predictions to estimate uncertainty. The MCD mechanism has mainly been applied to image processing tasks with convolutional neural networks (CNNs), where the uncertainty can be evaluated visually (Kim, Kim, and Choe 2020; Myojin et al. 2019; Stoean et al. 2020). Uber also used MCD combined with a Bayesian NN for large-scale time series anomaly detection (Zhu and Laptev 2017). ...
Preprint
Full-text available
Precise and reliable prediction of blast fragmentation is essential for the success of mining operations. Over the years, various machine learning models using artificial neural networks have been developed and proven to be efficient in predicting blast fragmentation. In this research, we design multiple-output neural networks to forecast the cumulative distribution function (CDF) of blast fragmentation to improve this prediction. The model architecture contains multiple response variables in the output layer that correspond to the CDF curve’s percentiles. We apply the Monte Carlo dropout procedure to estimate the uncertainty of the predictions. Data collected from the Nui Phao open-pit mine in Vietnam are used to train and validate the performance of the model. Results suggest that multiple-output neural network models provide better accuracy than single-output neural network models that predict each percentile on a CDF independently, whereas the Monte Carlo dropout technique can give valuable and relatively reliable information during decision making. Article highlights: • Precise and reliable prediction of blast fragmentation is essential for the success of mining operations. • A predictive model based on a multi-output neural network and the Monte Carlo dropout technique was designed to predict the fragmentation CDF curve in the blasting operation of an open-pit mine. • The predictive model was proven reliable and provided better accuracy than models based on a single-output neural network.
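A hypothetical sketch of the multiple-output idea described above: one network head predicts several percentiles of the fragmentation CDF at once, and dropout is kept active at prediction time to obtain Monte Carlo uncertainty estimates. The layer sizes, percentile grid, and names are assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

PERCENTILES = [10, 20, 30, 40, 50, 60, 70, 80, 90]    # assumed CDF percentiles

class MultiOutputCDFNet(nn.Module):
    def __init__(self, n_features: int, p_drop: float = 0.2):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(n_features, 64), nn.ReLU(), nn.Dropout(p_drop),
            nn.Linear(64, 64), nn.ReLU(), nn.Dropout(p_drop),
        )
        self.head = nn.Linear(64, len(PERCENTILES))    # one output per percentile

    def forward(self, x):
        return self.head(self.body(x))

@torch.no_grad()
def mc_predict(model, x, n_samples=50):
    # keep dropout active for MC sampling (safe here: the toy model has no batch norm)
    model.train()
    preds = torch.stack([model(x) for _ in range(n_samples)])
    return preds.mean(dim=0), preds.std(dim=0)         # mean CDF curve and its uncertainty
```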
... [Table excerpt from the cited review, mapping anatomical application areas to the U-net variants used for each: brain tumor, brain tissue, white matter tracts, fetal brain, stroke lesion/thrombus, cardiovascular structures, prostate cancer, liver cancer, nasopharyngeal cancer, femur, breast cancer, spinal cord, blood vessels, placenta, uterus, and vertebral column, segmented with base U-net, 3D U-net, 2.5D U-net, cascaded and parallel U-nets, U-net++, and variants using residual, dense, and inception blocks, attention gates, recurrent units, modified skip connections, or adversarial/GAN components.] ...
Preprint
Full-text available
U-net is an image segmentation technique developed primarily for medical image analysis that can precisely segment images using a scarce amount of training data. These traits provide U-net with a very high utility within the medical imaging community and have resulted in extensive adoption of U-net as the primary tool for segmentation tasks in medical imaging. The success of U-net is evident in its widespread use in all major image modalities from CT scans and MRI to X-rays and microscopy. Furthermore, while U-net is largely a segmentation tool, there have been instances of the use of U-net in other applications. As the potential of U-net is still increasing, in this review we look at the various developments that have been made in the U-net architecture and provide observations on recent trends. We examine the various innovations that have been made in deep learning and discuss how these tools facilitate U-net. Furthermore, we look at image modalities and application areas where U-net has been applied.
... The MCD mechanism to measure NN uncertainty has primarily been used for image processing tasks with convolutional neural networks (CNNs), where the resulting estimates can be assessed visually [13][14][15]. Nevertheless, there is also work on time series problems. ...
Article
Full-text available
The application of echo state networks to time series prediction has provided notable results, favored by their reduced computational cost, since the connection weights require no learning. However, there is a need for general methods that guide the choice of parameters (particularly the reservoir size and ridge regression coefficient), improve the prediction accuracy, and provide an assessment of the uncertainty of the estimates. In this paper we propose such a mechanism for uncertainty quantification based on Monte Carlo dropout, where the output of a subset of reservoir units is zeroed before the computation of the output. Dropout is only performed at the test stage, since the immediate goal is only the computation of a measure of the goodness of the prediction. Results show that the proposal is a promising method for uncertainty quantification, providing a value that is either strongly correlated with the prediction error or reflects the prediction of qualitative features of the time series. This mechanism could eventually be included into the learning algorithm in order to obtain performance enhancements and alleviate the burden of parameter choice.
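The mechanism described above can be sketched for a trained echo state network readout: at test time a random subset of reservoir activations is zeroed before the ridge-regression output weights are applied, and the spread over repeated draws serves as the uncertainty measure. All names and shapes below are assumptions, not the paper's code; no rescaling is applied, matching the zeroing-only description above.

```python
import numpy as np

def mc_dropout_esn_predict(states, w_out, p_drop=0.1, n_samples=100, rng=None):
    """states: (T, N) reservoir states; w_out: (N, n_outputs) trained readout weights."""
    rng = rng or np.random.default_rng(0)
    preds = []
    for _ in range(n_samples):
        mask = rng.random(states.shape[1]) >= p_drop    # keep roughly (1 - p_drop) of the units
        preds.append((states * mask) @ w_out)           # zero dropped units, then apply readout
    preds = np.stack(preds)                             # (n_samples, T, n_outputs)
    return preds.mean(axis=0), preds.std(axis=0)        # prediction and its uncertainty
```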
... Uncertainty modeling and estimation are being increasingly used in deep learning-based medical imaging applications [16,9,4,22,12,18]. These methods usually produce multiple output predictions for a single input and then measure uncertainty by aggregating information from these outputs. ...
Preprint
Full-text available
2D echocardiography is the most common imaging modality for cardiovascular diseases. The portability and relatively low-cost nature of Ultrasound (US) enable the US devices needed for performing echocardiography to be made widely available. However, acquiring and interpreting cardiac US images is operator dependent, limiting its use to only places where experts are present. Recently, Deep Learning (DL) has been used in 2D echocardiography for automated view classification, and structure and function assessment. Although these recent works show promise in developing computer-guided acquisition and automated interpretation of echocardiograms, most of these methods do not model and estimate uncertainty, which can be important when testing on data coming from a distribution further away from that of the training data. Uncertainty estimates can be beneficial both during the image acquisition phase (by providing real-time feedback to the operator on the acquired image's quality), and during automated measurement and interpretation. The performance of uncertainty models and quantification metrics may depend on the prediction task and the models being compared. Hence, to gain insight into uncertainty modelling for left ventricular segmentation from US images, we compare three ensembling-based uncertainty models quantified using four different metrics (one newly proposed) on state-of-the-art baseline networks using two publicly available echocardiogram datasets. We further demonstrate how uncertainty estimation can be used to automatically reject poor quality images and improve state-of-the-art segmentation results.
Article
Purpose: The purpose of this study was to develop and evaluate deep convolutional neural network (CNN) models for quantifying myocardial blood flow (MBF) as well as for identifying myocardial perfusion defects in dynamic cardiac computed tomography (CT) images. Methods: Adenosine stress cardiac CT perfusion data acquired from 156 patients with known or suspected coronary artery disease were considered for model development and validation. U-net-based deep CNN models were developed to segment the aorta and myocardium and to localize anatomical landmarks. Color-coded MBF maps were obtained in short-axis slices from the apex to the base level and were used to train a deep CNN classifier. Three binary classification models were built for the detection of perfusion defects in the left anterior descending artery (LAD), the right coronary artery (RCA), and the left circumflex artery (LCX) territories. Results: Mean Dice scores were 0.94 (±0.07) and 0.86 (±0.06) for the aorta and myocardial deep learning-based segmentations, respectively. With the localization U-net, mean distance errors were 3.5 (±3.5) mm and 3.8 (±2.4) mm for the basal and apical center points, respectively. The classification models identified perfusion defects with mean area under the receiver operating characteristic curve (AUROC) values of 0.959 (±0.023) for LAD, 0.949 (±0.016) for RCA, and 0.957 (±0.021) for LCX. Conclusion: The presented method has the potential to fully automate the quantification of MBF and subsequently identify the main coronary artery territories with myocardial perfusion defects in dynamic cardiac CT perfusion.
Article
The non-perfusion area (NPA) of the retina is an important indicator in the visual prognosis of patients with retinal vein occlusion. Therefore, automatic detection of NPA will help its management. Deep learning models for NPA segmentation in fluorescein angiography have been reported. However, typical deep learning models do not adequately address the uncertainty of the prediction, which may lead to missed lesions and difficulties in working with medical professionals. In this study, we developed deep segmentation models with uncertainty estimation using Monte Carlo dropout and compared the accuracy of prediction and reliability of uncertainty across different models (U-Net, PSPNet, and DeepLabv3+) and uncertainty measures (standard deviation and mutual information). The study included 403 Japanese fluorescein angiography images of retinal vein occlusion. The mean Dice scores were 65.6 ± 9.6%, 66.8 ± 12.3%, and 73.6 ± 9.4% for U-Net, PSPNet, and DeepLabv3+, respectively. The uncertainty scores were best for U-Net, which suggests that model complexity may degrade the quality of uncertainty estimation. Overlooked lesions and inconsistent predictions led to high uncertainty values. The results indicated that the uncertainty estimation would help decrease the risk of missed lesions.
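The two uncertainty measures compared above (standard deviation and mutual information) can both be computed from a stack of Monte Carlo dropout forward passes. The sketch below assumes binary (foreground/background) probabilities of shape (n_samples, H, W); it is illustrative only, not the study's implementation.

```python
import numpy as np

def binary_entropy(p, eps=1e-7):
    p = np.clip(p, eps, 1.0 - eps)
    return -(p * np.log(p) + (1.0 - p) * np.log(1.0 - p))

def uncertainty_maps(probs: np.ndarray):
    sd_map = probs.std(axis=0)                               # pixel-wise standard deviation
    mean_p = probs.mean(axis=0)
    predictive_entropy = binary_entropy(mean_p)              # entropy of the mean prediction
    expected_entropy = binary_entropy(probs).mean(axis=0)    # mean entropy of the individual samples
    mutual_information = predictive_entropy - expected_entropy
    return sd_map, mutual_information
```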
Preprint
Full-text available
Background: Artificial intelligence techniques have shown great potential in cardiology, especially in quantifying cardiac biventricular function, volume, mass, and ejection fraction (EF). However, its use in clinical practice is not straightforward due to its poor reproducibility with cases from daily practice, among other reasons. Objectives: To validate a new artificial intelligence tool in order to quantify the cardiac biventricular function (volume, mass, and EF). To analyze its robustness in the clinical area, and the computational times compared with conventional methods. Methods: A total of 189 patients were analyzed: 89 from a regional center and 100 from a public center. The method proposes two convolutional networks that include anatomical information of the heart to reduce classification errors. Results: A high concordance (Pearson coefficient) was observed between manual quantification and the proposed quantification of cardiac function (0.98, 0.92, 0.96 and 0.8 for volumes and biventricular EF) in about 5 seconds per study. Conclusions: This method quantifies biventricular function and volumes in seconds with an accuracy equivalent to that of a specialist.
Article
Full-text available
Background: Artificial intelligence techniques have shown great potential in cardiology, especially in quantifying cardiac biventricular function, volume, mass, and ejection fraction (EF). However, its use in clinical practice is not straightforward due to its poor reproducibility with cases from daily practice, among other reasons. Objectives: To validate a new artificial intelligence tool in order to quantify the cardiac biventricular function (volume, mass, and EF). To analyze its robustness in the clinical area, and the computational times compared with conventional methods. Methods: A total of 189 patients were analyzed: 89 from a regional center and 100 from a public center. The method proposes two convolutional networks that include anatomical information of the heart to reduce classification errors. Results: A high concordance (Pearson coefficient) was observed between manual quantification and the proposed quantification of cardiac function (0.98, 0.92, 0.96 and 0.8 for volumes and biventricular EF) in about 5 seconds per study. Conclusions: This method quantifies biventricular function and volumes in seconds with an accuracy equivalent to that of a specialist.
Chapter
Machine learning (ML) and deep learning (DL) techniques have been increasingly applied to help diagnose coronary artery disease (CAD) as well as help with patient management decisions. Imaging has begun to play a larger role in these studies. Cardiovascular magnetic resonance (CMR) offers multiple techniques to diagnose CAD, and ML and DL have been used with these techniques in an effort to improve both the image quality and the speed of image interpretation. In particular, ML and DL have been applied to direct imaging of coronary vessel anatomy, imaging of coronary flow, and myocardial perfusion imaging. In applications aimed at imaging the coronary artery anatomy, ML and DL techniques have been used to improve image quality in reconstruction, improve the speed of reconstruction, allow for more sparse sampling of data, and enable automated evaluation of image quality. In applications of coronary flow imaging, ML and DL techniques have been used to reduce the uncertainty of phase-contrast measurements of blood velocity and flow, and physics-informed neural networks have been used to improve the modeling of flow based on both acquired image data and natural laws of motion. In myocardial perfusion imaging, ML and DL techniques have been used at multiple steps in the image analysis process to automate quantitative blood flow measurements, including motion correction, image registration, tracer kinetic modeling, and detection of perfusion defects. Future applications of ML and DL in evaluating CAD are expected to continue to develop with increasing impact in both diagnosis and patient management.
Article
The risk of coronary heart disease (CHD) clinical manifestations and patient management is estimated according to risk scores that account for multiple risk factors, thus failing to capture individual cardiovascular risk. Technological improvements in the field of medical imaging, in particular in cardiac computed tomography angiography and cardiac magnetic resonance protocols, laid the groundwork for the development of radiogenomics. Radiogenomics aims to integrate a huge number of imaging features and molecular profiles to identify optimal radiomic/biomarker signatures. In addition, supervised and unsupervised artificial intelligence algorithms have the potential to combine different layers of data (imaging parameters and features, clinical variables, and biomarkers) and elaborate complex and specific CHD risk models allowing more accurate diagnosis and reliable prognosis prediction. Literature from the past 5 years was systematically collected from PubMed and Scopus databases, and 60 studies were selected. We speculated on the applicability of radiogenomics and artificial intelligence, through the application of machine learning algorithms, to identify CHD and characterize atherosclerotic lesions and myocardial abnormalities. Radiomic features extracted by cardiac computed tomography angiography and cardiac magnetic resonance showed good diagnostic accuracy for the identification of coronary plaques and myocardium structure; on the other hand, few studies exploited radiogenomics integration, thus suggesting further research efforts in this field. Cardiac computed tomography angiography was the most used noninvasive imaging modality for artificial intelligence applications. Several studies provided high performance for CHD diagnosis, classification, and prognostic assessment, even though several efforts are still needed to validate and standardize algorithms for routine CHD patient care in accordance with good medical practice.
Article
Background and objective: High-dimensional data generally contains more accurate information for medical images; for example, computerized tomography (CT) data can depict the three-dimensional structure of organs more precisely. However, high-dimensional data often requires enormous computation and memory in deep learning convolutional networks, while dimensionality reduction usually leads to performance degradation. Methods: In this paper, a two-dimensional deep learning segmentation network based on multi-pinacoidal plane fusion is proposed for medical volume data to cover more information while keeping computation under control. The approach is broadly compatible with existing backbones and uses the proposed model to extract global information between different input layers. Results: The approach works with different backbone networks. Using the approach, DeepUnet's Dice coefficient (Dice) and positive predictive value (PPV) are 0.883 and 0.982, respectively, showing satisfactory progress. Various backbones can benefit from the method. Conclusions: The comparison of different backbones shows that the proposed network with multi-pinacoidal plane fusion achieves better results both quantitatively and qualitatively.
Article
Full-text available
U-net is an image segmentation technique developed primarily for medical image analysis that can precisely segment images using a scarce amount of training data. These traits provide U-net with a high utility within the medical imaging community and have resulted in extensive adoption of U-net as the primary tool for segmentation tasks in medical imaging. The success of U-net is evident in its widespread use in nearly all major image modalities, from CT scans and MRI to X-rays and microscopy. Furthermore, while U-net is largely a segmentation tool, there have been instances of the use of U-net in other applications. Given that U-net’s potential is still increasing, this narrative literature review examines the numerous developments and breakthroughs in the U-net architecture and provides observations on recent trends. We also discuss the many innovations that have advanced in deep learning and discuss how these tools facilitate U-net. In addition, we review the different image modalities and application areas that have been enhanced by U-net.
Article
Objectives Cardiac magnetic resonance (CMR) first-pass perfusion is an established noninvasive diagnostic imaging modality for detecting myocardial ischemia. A CMR perfusion sequence provides a time series of 2D images for dynamic contrast enhancement of the heart. Accurate myocardial segmentation of the perfusion images is essential for quantitative analysis and it can facilitate automated pixel-wise myocardial perfusion quantification. Methods In this study, we compared different deep learning methodologies for CMR perfusion image segmentation. We evaluated the performance of several image segmentation methods using convolutional neural networks, such as the U-Net in 2D and 3D (2D plus time) implementations, with and without an additional motion correction image processing step. We also present a modified U-Net architecture with a novel type of temporal pooling layer which results in improved performance. Results The best DICE scores were 0.86 and 0.90 for LV myocardium and LV cavity, while the best Hausdorff distances were 2.3 and 2.1 pixels for LV myocardium and LV cavity using 5-fold cross-validation. The methods were corroborated in a second independent test set of 20 patients with similar performance (best DICE scores 0.84 for LV myocardium). Conclusions Our results showed that the LV myocardial segmentation of CMR perfusion images is best performed using a combination of motion correction and 3D convolutional networks which significantly outperformed all tested 2D approaches. Reliable frame-by-frame segmentation will facilitate new and improved quantification methods for CMR perfusion imaging. Key Points • Reliable segmentation of the myocardium offers the potential to perform pixel level perfusion assessment. • A deep learning approach in combination with motion correction, 3D (2D + time) methods, and a deep temporal connection module produced reliable segmentation results.
Article
Full-text available
Deep learning (DL) has revolutionized the field of computer vision and image processing. In medical imaging, algorithmic solutions based on DL have been shown to achieve high performance on tasks that previously required medical experts. However, DL-based solutions for disease detection have been proposed without methods to quantify and control their uncertainty in a decision. In contrast, a physician knows whether she is uncertain about a case and will consult more experienced colleagues if needed. Here we evaluate dropout-based Bayesian uncertainty measures for DL in diagnosing diabetic retinopathy (DR) from fundus images and show that they capture uncertainty better than straightforward alternatives. Furthermore, we show that uncertainty-informed decision referral can improve diagnostic performance. Experiments across different networks, tasks and datasets show robust generalization. Depending on network capacity and task/dataset difficulty, we surpass 85% sensitivity and 80% specificity as recommended by the NHS when referring 0-20% of the most uncertain decisions for further inspection. We analyse causes of uncertainty by relating intuitions from 2D visualizations to the high-dimensional image space. While uncertainty is sensitive to clinically relevant cases, sensitivity to unfamiliar data samples is task dependent, but can be rendered more robust.
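Uncertainty-informed decision referral, as evaluated above, amounts to setting aside the most uncertain fraction of cases for human review and scoring the model only on the retained ones. A rough sketch under that reading, with variable names chosen for illustration:

```python
import numpy as np

def refer_most_uncertain(y_true, y_pred, uncertainty, refer_fraction=0.2):
    """y_true, y_pred: binary labels/predictions; uncertainty: one score per case."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    order = np.argsort(uncertainty)                       # most certain cases first
    n_keep = int(round(len(order) * (1.0 - refer_fraction)))
    keep = order[:n_keep]                                 # decisions kept automated; the rest are referred
    tp = np.sum((y_pred[keep] == 1) & (y_true[keep] == 1))
    tn = np.sum((y_pred[keep] == 0) & (y_true[keep] == 0))
    fp = np.sum((y_pred[keep] == 1) & (y_true[keep] == 0))
    fn = np.sum((y_pred[keep] == 0) & (y_true[keep] == 1))
    sensitivity = tp / max(tp + fn, 1)
    specificity = tn / max(tn + fp, 1)
    return sensitivity, specificity
```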
Article
Full-text available
Deep learning has enabled major advances in the fields of computer vision, natural language processing, and multimedia among many others. Developing a deep learning system is arduous and complex, as it involves constructing neural network architectures, managing training/trained models, tuning optimization process, preprocessing and organizing data, etc. TensorLayer is a versatile Python library that aims at helping researchers and engineers efficiently develop deep learning systems. It offers rich abstractions for neural networks, model and data management, and parallel workflow mechanism. While boosting efficiency, TensorLayer maintains both performance and scalability. TensorLayer was released in September 2016 on GitHub, and has helped people from academia and industry develop real-world applications of deep learning.
Article
Full-text available
Dropout is used as a practical tool to obtain uncertainty estimates in large vision models and reinforcement learning (RL) tasks. But to obtain well-calibrated uncertainty estimates, a grid-search over the dropout probabilities is necessary - a prohibitive operation with large models, and an impossible one with RL. We propose a new dropout variant which gives improved performance and better calibrated uncertainties. Relying on recent developments in Bayesian deep learning, we use a continuous relaxation of dropout's discrete masks. Together with a principled optimisation objective, this allows for automatic tuning of the dropout probability in large models, and as a result faster experimentation cycles. In RL this allows the agent to adapt its uncertainty dynamically as more data is observed. We analyse the proposed variant extensively on a range of tasks, and give insights into common practice in the field where larger dropout probabilities are often used in deeper model layers.
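The continuous relaxation of dropout's discrete masks can be sketched as a module whose dropout probability is a learnable parameter, so it can be tuned by gradient descent rather than grid search. The snippet below shows only the relaxed mask and rescaling; the regularisation term that ties the learned probability to the dataset size is omitted, and the initial values and temperature are assumptions, not the paper's settings.

```python
import torch
import torch.nn as nn

class ConcreteDropout(nn.Module):
    def __init__(self, init_p=0.1, temperature=0.1):
        super().__init__()
        # store the dropout probability as a learnable logit
        self.p_logit = nn.Parameter(torch.logit(torch.tensor(init_p)))
        self.temperature = temperature

    def forward(self, x):
        p = torch.sigmoid(self.p_logit)
        eps = 1e-7
        u = torch.rand_like(x)                            # uniform noise per element
        # relaxed (soft) keep mask: approaches a Bernoulli(1 - p) sample as temperature -> 0
        drop_logit = (torch.log(p + eps) - torch.log(1 - p + eps)
                      + torch.log(u + eps) - torch.log(1 - u + eps))
        keep_mask = 1.0 - torch.sigmoid(drop_logit / self.temperature)
        return x * keep_mask / (1.0 - p)                  # rescale as in standard dropout
```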
Article
Full-text available
There are two major types of uncertainty one can model. Aleatoric uncertainty captures noise inherent in the observations. On the other hand, epistemic uncertainty accounts for uncertainty in the model -- uncertainty which can be explained away given enough data. Traditionally it has been difficult to model epistemic uncertainty in computer vision, but with new Bayesian deep learning tools this is now possible. We study the benefits of modeling epistemic vs. aleatoric uncertainty in Bayesian deep learning models for vision tasks. For this we present a Bayesian deep learning framework combining input-dependent aleatoric uncertainty together with epistemic uncertainty. We study models under the framework with per-pixel semantic segmentation and depth regression tasks. Further, our explicit uncertainty formulation leads to new loss functions for these tasks, which can be interpreted as learned attenuation. This makes the loss more robust to noisy data, also giving new state-of-the-art results on segmentation and depth regression benchmarks.
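The "learned attenuation" loss mentioned above corresponds, in its usual regression form, to predicting a per-pixel log-variance alongside the output so that noisy observations down-weight themselves. A minimal sketch under that standard formulation, not the paper's exact code:

```python
import torch

def heteroscedastic_loss(pred, log_var, target):
    """pred, log_var, target: tensors of the same shape (e.g. per-pixel depth predictions)."""
    precision = torch.exp(-log_var)                       # 1 / sigma^2
    # residuals on noisy pixels are attenuated, while the log-variance term
    # penalises the trivial solution of predicting huge variance everywhere
    return (0.5 * precision * (pred - target) ** 2 + 0.5 * log_var).mean()
```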
Article
Full-text available
Background: Quantitative assessment of myocardial blood flow (MBF) with first-pass perfusion cardiovascular magnetic resonance (CMR) requires a measurement of the arterial input function (AIF). This study presents an automated method to improve the objectivity and reduce processing time for measuring the AIF from first-pass perfusion CMR images. This automated method is used to compare the impact of different AIF measurements on MBF quantification. Methods: Gadolinium-enhanced perfusion CMR was performed on a 1.5 T scanner using a saturation recovery dual-sequence technique. Rest and stress perfusion series from 270 clinical studies were analyzed. Automated image processing steps included motion correction, intensity correction, detection of the left ventricle (LV), independent component analysis, and LV pixel thresholding to calculate the AIF signal. The results were compared with manual reference measurements using several quality metrics based on the contrast enhancement and timing characteristics of the AIF. The median and 95 % confidence interval (CI) of the median were reported. Finally, MBF was calculated and compared in a subset of 21 clinical studies using the automated and manual AIF measurements. Results: Two clinical studies were excluded from the comparison due to a congenital heart defect present in one and a contrast administration issue in the other. The proposed method successfully processed 99.63 % of the remaining image series. Manual and automatic AIF time-signal intensity curves were strongly correlated with median correlation coefficient of 0.999 (95 % CI [0.999, 0.999]). The automated method effectively selected bright LV pixels, excluded papillary muscles, and required less processing time than the manual approach. There was no significant difference in MBF estimates between manually and automatically measured AIFs (p = NS). However, different sizes of regions of interest selection in the LV cavity could change the AIF measurement and affect MBF calculation (p = NS to p = 0.03). Conclusion: The proposed automatic method produced AIFs similar to the reference manual method but required less processing time and was more objective. The automated algorithm may improve AIF measurement from the first-pass perfusion CMR images and make quantitative myocardial perfusion analysis more robust and readily available.
Article
Full-text available
Cardiovascular magnetic resonance (CMR) has become a key imaging modality in clinical cardiology practice due to its unique capabilities for non-invasive imaging of the cardiac chambers and great vessels. A wide range of CMR sequences have been developed to assess various aspects of cardiac structure and function, and significant advances have also been made in terms of imaging quality and acquisition times. A lot of research has been dedicated to the development of global and regional quantitative CMR indices that help the distinction between health and pathology. The goal of this review paper is to discuss the structural and functional CMR indices that have been proposed thus far for clinical assessment of the cardiac chambers. We include indices definitions, the requirements for the calculations, exemplar applications in cardiovascular diseases, and the corresponding normal ranges. Furthermore, we review the most recent state-of-the art techniques for the automatic segmentation of the cardiac boundaries, which are necessary for the calculation of the CMR indices. Finally, we provide a detailed discussion of the existing literature and of the future challenges that need to be addressed to enable a more robust and comprehensive assessment of the cardiac chambers in clinical practice.
Article
Full-text available
We present a novel and practical deep fully convolutional neural network architecture for semantic pixel-wise segmentation termed SegNet. This core trainable segmentation engine consists of an encoder network, a corresponding decoder network followed by a pixel-wise classification layer. The architecture of the encoder network is topologically identical to the 13 convolutional layers in the VGG16 network. The role of the decoder network is to map the low resolution encoder feature maps to full input resolution feature maps for pixel-wise classification. The novelty of SegNet lies in the manner in which the decoder upsamples its lower resolution input feature map(s). Specifically, the decoder uses pooling indices computed in the max-pooling step of the corresponding encoder to perform non-linear upsampling. This eliminates the need for learning to upsample. The upsampled maps are sparse and are then convolved with trainable filters to produce dense feature maps. We compare our proposed architecture with the fully convolutional network (FCN) architecture and its variants. This comparison reveals the memory versus accuracy trade-off involved in achieving good segmentation performance. The design of SegNet was primarily motivated by road scene understanding applications. Hence, it is efficient both in terms of memory and computational time during inference. It is also significantly smaller in the number of trainable parameters than competing architectures and can be trained end-to-end using stochastic gradient descent. We also benchmark the performance of SegNet on Pascal VOC12 salient object segmentation and the recent SUN RGB-D indoor scene understanding challenge. We show that SegNet provides competitive performance although it is significantly smaller than other architectures. We also provide a Caffe implementation of SegNet and a web demo at http://mi.eng.cam.ac.uk/projects/segnet/
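The decoder mechanism described above, non-linear upsampling driven by the indices stored during the encoder's max-pooling step, maps directly onto standard max-unpooling operations. A toy two-layer illustration (not the full SegNet architecture):

```python
import torch
import torch.nn as nn

pool = nn.MaxPool2d(kernel_size=2, stride=2, return_indices=True)
unpool = nn.MaxUnpool2d(kernel_size=2, stride=2)
conv = nn.Conv2d(1, 8, kernel_size=3, padding=1)
decode_conv = nn.Conv2d(8, 1, kernel_size=3, padding=1)

x = torch.randn(1, 1, 64, 64)
feats = torch.relu(conv(x))
pooled, indices = pool(feats)            # encoder keeps the max-pooling indices
sparse = unpool(pooled, indices)         # decoder places values back at those indices (no learned upsampling)
dense = decode_conv(sparse)              # sparse maps are densified by trainable filters
print(dense.shape)                       # torch.Size([1, 1, 64, 64])
```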
Article
Full-text available
There is large consent that successful training of deep networks requires many thousand annotated training samples. In this paper, we present a network and training strategy that relies on the strong use of data augmentation to use the available annotated samples more efficiently. The architecture consists of a contracting path to capture context and a symmetric expanding path that enables precise localization. We show that such a network can be trained end-to-end from very few images and outperforms the prior best method (a sliding-window convolutional network) on the ISBI challenge for segmentation of neuronal structures in electron microscopic stacks. Using the same network trained on transmitted light microscopy images (phase contrast and DIC) we won the ISBI cell tracking challenge 2015 in these categories by a large margin. Moreover, the network is fast. Segmentation of a 512x512 image takes less than a second on a recent GPU. The full implementation (based on Caffe) and the trained networks are available at http://lmb.informatik.uni-freiburg.de/people/ronneber/u-net .
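The contracting/expanding structure with skip connections described above can be sketched in a few dozen lines; channel counts are reduced here for brevity and this is a simplified illustration, not the reference implementation linked above.

```python
import torch
import torch.nn as nn

def block(c_in, c_out):
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU(inplace=True),
    )

class TinyUNet(nn.Module):
    def __init__(self, in_ch=1, n_classes=2):
        super().__init__()
        self.enc1, self.enc2 = block(in_ch, 16), block(16, 32)
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = block(32, 64)
        self.up2 = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec2 = block(64, 32)                               # 32 skip + 32 upsampled channels
        self.up1 = nn.ConvTranspose2d(32, 16, 2, stride=2)
        self.dec1 = block(32, 16)
        self.head = nn.Conv2d(16, n_classes, 1)

    def forward(self, x):
        e1 = self.enc1(x)                                       # contracting path captures context
        e2 = self.enc2(self.pool(e1))
        b = self.bottleneck(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))     # skip connection
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))    # enables precise localization
        return self.head(d1)

print(TinyUNet()(torch.randn(1, 1, 64, 64)).shape)              # torch.Size([1, 2, 64, 64])
```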
Article
Full-text available
Background: Many patients have symptoms suggestive of coronary artery disease (CAD) and are often evaluated with the use of diagnostic testing, although there are limited data from randomized trials to guide care. Methods: We randomly assigned 10,003 symptomatic patients to a strategy of initial anatomical testing with the use of coronary computed tomographic angiography (CTA) or to functional testing (exercise electrocardiography, nuclear stress testing, or stress echocardiography). The composite primary end point was death, myocardial infarction, hospitalization for unstable angina, or major procedural complication. Secondary end points included invasive cardiac catheterization that did not show obstructive CAD and radiation exposure. Results: The mean age of the patients was 60.8±8.3 years, 52.7% were women, and 87.7% had chest pain or dyspnea on exertion. The mean pretest likelihood of obstructive CAD was 53.3±21.4%. Over a median follow-up period of 25 months, a primary end-point event occurred in 164 of 4996 patients in the CTA group (3.3%) and in 151 of 5007 (3.0%) in the functional-testing group (adjusted hazard ratio, 1.04; 95% confidence interval, 0.83 to 1.29; P=0.75). CTA was associated with fewer catheterizations showing no obstructive CAD than was functional testing (3.4% vs. 4.3%, P=0.02), although more patients in the CTA group underwent catheterization within 90 days after randomization (12.2% vs. 8.1%). The median cumulative radiation exposure per patient was lower in the CTA group than in the functional-testing group (10.0 mSv vs. 11.3 mSv), but 32.6% of the patients in the functional-testing group had no exposure, so the overall exposure was higher in the CTA group (mean, 12.0 mSv vs. 10.1 mSv; P<0.001). Conclusions: In symptomatic patients with suspected CAD who required noninvasive testing, a strategy of initial CTA, as compared with functional testing, did not improve clinical outcomes over a median follow-up of 2 years. (Funded by the National Heart, Lung, and Blood Institute; PROMISE ClinicalTrials.gov number, NCT01174550.).
Article
Full-text available
Purpose: To develop and validate a technique for near-automated definition of myocardial regions of interest suitable for perfusion evaluation during vasodilator stress cardiac magnetic resonance (MR) imaging. Materials and methods: The institutional review board approved the study protocol, and all patients provided informed consent. Image noise density distribution was used as a basis for endocardial and epicardial border detection combined with nonrigid registration. This method was tested in 42 patients undergoing contrast material-enhanced cardiac MR imaging (at 1.5 T) at rest and during vasodilator (adenosine or regadenoson) stress, including 15 subjects with normal myocardial perfusion and 27 patients referred for coronary angiography. Contrast enhancement-time curves were near-automatically generated and were used to calculate perfusion indexes. The results were compared with results of conventional manual analysis, using quantitative coronary angiography results as a reference for stenosis greater than 50%. Statistical analyses included the Student t test, linear regression, Bland-Altman analysis, and κ statistics. Results: Analysis of one sequence required less than 1 minute and resulted in high-quality contrast enhancement curves both at rest and stress (mean signal-to-noise ratios, 17 ± 7 [standard deviation] and 22 ± 8, respectively), showing expected patterns of first-pass perfusion. Perfusion indexes accurately depicted stress-induced hyperemia (increased upslope, from 6.7 ± 2.3 sec⁻¹ to 15.6 ± 5.9 sec⁻¹; P<.0001). Measured segmental pixel intensities correlated highly with results of manual analysis (r=0.95). The derived perfusion indexes also correlated highly with (r up to 0.94) and showed the same diagnostic accuracy as manual analysis (area under the receiver operating characteristic curve, up to 0.72 vs 0.73). Conclusion: Despite the dynamic nature of contrast-enhanced image sequences and respiratory motion, fast near-automated detection of myocardial segments and accurate quantification of tissue contrast is feasible at rest and during vasodilator stress. This technique, shown to be as accurate as conventional manual analysis, allows detection of stress-induced perfusion abnormalities.
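For readers unfamiliar with upslope-type perfusion indexes, the following is a generic sketch of fitting a line to the rising portion of a segmental contrast enhancement-time curve; the baseline window, the 20-80% fitting range, and the synthetic curve are simplifying assumptions, not this study's algorithm.

```python
# Generic upslope-type perfusion index from a contrast enhancement-time curve.
import numpy as np

def upslope_index(signal, times):
    """Slope of a line fit to the 20-80% portion of the first-pass rise."""
    baseline = signal[:3].mean()                       # assumed 3 pre-contrast frames
    peak_val = signal.max()
    lo = baseline + 0.2 * (peak_val - baseline)
    hi = baseline + 0.8 * (peak_val - baseline)
    rising = times <= times[np.argmax(signal)]
    idx = np.where((signal >= lo) & (signal <= hi) & rising)[0]
    if len(idx) < 2:
        return 0.0
    slope, _ = np.polyfit(times[idx], signal[idx], deg=1)
    return slope                                       # signal units per second

t = np.arange(40) * 1.0                                # ~1 s per frame, illustrative
curve = 10 + 40 / (1 + np.exp(-(t - 12)))              # synthetic first-pass-like curve
print(round(upslope_index(curve, t), 2))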
Conference Paper
Full-text available
In this paper we first discuss the technical challenges preventing an automated analysis of cardiac perfusion MR images and subsequently present a fully unsupervised workflow to address the problems. The proposed solution consists of key-frame detection, consecutive motion compensation, surface coil inhomogeneity correction using proton density images and robust generation of pixel-wise perfusion parameter maps. The entire processing chain has been implemented on clinical MR systems to achieve unsupervised inline analysis of perfusion MRI. Validation results are reported for 260 perfusion time series, demonstrating feasibility of the approach.
Article
Full-text available
Many patients undergoing coronary angiography because of chest pain syndromes, believed to be indicative of obstructive atherosclerosis of the epicardial coronary arteries, are found to have normal angiograms. In the past two decades, a number of studies have reported that abnormalities in the function and structure of the coronary microcirculation may occur in patients without obstructive atherosclerosis, but with risk factors or with myocardial diseases as well as in patients with obstructive atherosclerosis; furthermore, coronary microvascular dysfunction (CMD) can be iatrogenic. In some instances, CMD represents an epiphenomenon, whereas in others it is an important marker of risk or may even contribute to the pathogenesis of cardiovascular and myocardial diseases, thus becoming a therapeutic target. This review article provides an update on the clinical relevance of CMD in different clinical settings and also the implications for therapy.
Article
Full-text available
We present a framework for the analysis of short axis cardiac MRI, using statistical models of shape and appearance. The framework integrates temporal and structural constraints and avoids common optimization problems inherent in such high dimensional models. The first contribution is the introduction of an algorithm for fitting 3D active appearance models (AAMs) on short axis cardiac MRI. We observe a 44-fold increase in fitting speed and a segmentation accuracy that is on par with Gauss-Newton optimization, one of the most widely used optimization algorithms for such problems. The second contribution involves an investigation on hierarchical 2D+time active shape models (ASMs), that integrate temporal constraints and simultaneously improve the 3D AAM based segmentation. We obtain encouraging results (endocardial/epicardial error 1.43 ± 0.49 mm / 1.51 ± 0.48 mm) on 7980 short axis cardiac MR images acquired from 33 subjects. We have placed our dataset online, for the community to use and build upon.
Article
Quantitative evaluation of diseased myocardium in cardiac magnetic resonance imaging (MRI) plays an important role in the diagnosis and prognosis of cardiovascular disease. The development of a user interface with state-of-the-art techniques would be beneficial for the efficient post-processing and analysis of cardiac images. The aim of this study was to develop a custom user interface tool for the quantitative evaluation of the short-axis left ventricle (LV) and myocardium. Modules for cine, perfusion, late gadolinium enhancement (LGE), and T1 mapping data analyses were developed in Python, and a module for three-dimensional (3D) visualization was implemented using PyQtGraph library. The U-net segmentation and manual contour correction in the user interface were effective in generating reference myocardial segmentation masks, which helped obtain labeled data for deep learning model training. The proposed U-net segmentation resulted in a mean Dice score of 0.87 (±0.02) in cine diastolic myocardial segmentation. The LV mass measurement of the proposed method showed good agreement with that of manual segmentation (intraclass correlation coefficient = 0.97, mean difference and 95% Bland-Altman limits of agreement = 4.4 ± 12.2 g). C++ implementation of voxel-wise T1 mapping and its binding via pybind11 led to a significant computational gain in calculating the T1 maps. The 3D visualization enabled fast user interactions in rotating and zooming-in/out of the 3D myocardium and scar transmurality. The custom tool has the potential to provide a fast and comprehensive analysis of the LV and myocardium from multi-parametric MRI data in clinical settings.
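Several of the entries above, and the main study itself, report Dice similarity scores; a minimal sketch of the standard definition is shown below, using illustrative masks rather than real segmentations.

```python
# Dice similarity coefficient between a predicted and a reference myocardial mask.
import numpy as np

def dice_score(pred, ref, eps=1e-7):
    pred = pred.astype(bool)
    ref = ref.astype(bool)
    intersection = np.logical_and(pred, ref).sum()
    return (2.0 * intersection + eps) / (pred.sum() + ref.sum() + eps)

pred = np.zeros((128, 128), dtype=np.uint8); pred[40:90, 40:90] = 1
ref = np.zeros((128, 128), dtype=np.uint8); ref[45:95, 45:95] = 1
print(round(dice_score(pred, ref), 3))   # overlap of two shifted squares
```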
Article
This paper proposes a novel approach for automatic left ventricle (LV) quantification using convolutional neural networks (CNN). Methods: The general framework consists of one CNN for detecting the LV, and another for tissue classification. Also, three new deep learning architectures were proposed for LV quantification. These new CNNs introduce the ideas of sparsity and depthwise separable convolution into the U-net architecture, as well as a level-to-level residual learning strategy. To this end, we extend the classical U-net architecture and use the generalized Jaccard distance as the optimization objective function. Results: The CNNs were trained and evaluated with 140 patients from two public cardiovascular magnetic resonance datasets (Sunnybrook and Cardiac Atlas Project) by using a 5-fold cross-validation strategy. Our results demonstrate a suitable accuracy for myocardial segmentation (~0.9 Dice coefficient), and a strong correlation with the most relevant physiological measures: 0.99 for end-diastolic and end-systolic volume, 0.97 for the left myocardial mass, 0.95 for the ejection fraction and 0.93 for the stroke volume and cardiac output. Conclusion: Our simulation and clinical evaluation results demonstrate the capability and merits of the proposed CNN to estimate different structural and functional features such as LV mass and EF which are commonly used for both prognosis and treatment of different pathologies. Significance: This paper suggests a new approach for automatic LV quantification based on deep learning where errors are comparable to the inter- and intra-operator ranges for manual contouring. Also, this approach may have important applications in motion quantification.
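A sketch of a soft (differentiable) Jaccard objective of the kind referenced above, written in PyTorch; the exact formulation, tensor shapes, and epsilon smoothing are assumptions rather than the authors' implementation.

```python
# Soft Jaccard (IoU) loss usable as a segmentation training objective.
import torch

def soft_jaccard_loss(probs, target, eps=1e-7):
    """probs: (N, H, W) foreground probabilities; target: (N, H, W) binary masks."""
    probs = probs.flatten(1)
    target = target.flatten(1).float()
    intersection = (probs * target).sum(dim=1)
    union = probs.sum(dim=1) + target.sum(dim=1) - intersection
    jaccard = (intersection + eps) / (union + eps)
    return (1.0 - jaccard).mean()          # minimize 1 - IoU

probs = torch.sigmoid(torch.randn(2, 64, 64))
target = (torch.rand(2, 64, 64) > 0.5)
print(float(soft_jaccard_loss(probs, target)))
```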
Article
Cardiac left ventricle (LV) quantification is among the most clinically important tasks for identification and diagnosis of cardiac disease. However, it is still a task of great challenge due to the high variability of cardiac structure across subjects and the complexity of temporal dynamics of cardiac sequences. Full quantification, i.e., to simultaneously quantify all LV indices including two areas (cavity and myocardium), six regional wall thicknesses (RWT), three LV dimensions, and one phase (Diastole or Systole), is even more challenging since the ambiguous correlations existing among these indices may impinge upon the convergence and generalization of the learning procedure. In this paper, we propose a deep multitask relationship learning network (DMTRL) for full LV quantification. The proposed DMTRL first obtains expressive and robust cardiac representations with a deep convolution neural network (CNN); then models the temporal dynamics of cardiac sequences effectively with two parallel recurrent neural network (RNN) modules. After that, it estimates the three types of LV indices under a Bayesian framework that is capable of learning multitask relationships automatically, and estimates the cardiac phase with a softmax classifier. The CNN representation, RNN temporal modeling, Bayesian multitask relationship learning, and softmax classifier establish an effective and integrated network which can be learned in an end-to-end manner. The obtained task covariance matrix captures the correlations existing among these indices, and therefore leads to accurate estimation of LV indices and cardiac phase. Experiments on MR sequences of 145 subjects show that DMTRL achieves highly accurate prediction, with average mean absolute errors of 180 mm², 1.39 mm, and 2.51 mm for areas, RWT, and dimensions, and an error rate of 8.2% for the phase classification. This endows our method with great potential in comprehensive clinical assessment of global, regional and dynamic cardiac function.
Article
Automated left ventricular (LV) segmentation is crucial for efficient quantification of cardiac function and morphology to aid subsequent management of cardiac pathologies. In this paper, we parameterize the complete (all short axis slices and phases) LV segmentation task in terms of the radial distances between the LV centerpoint and the endo- and epicardial contours in polar space. We then utilize convolutional neural network regression to infer these parameters. Utilizing parameter regression, as opposed to conventional pixel classification, allows the network to inherently reflect domain-specific physical constraints. We have benchmarked our approach primarily against the publicly available left ventricle segmentation challenge (LVSC) dataset, which consists of 100 training and 100 validation cardiac MRI cases representing a heterogeneous mix of cardiac pathologies and imaging parameters across multiple centers. Our approach attained a 0.77 Jaccard index, which is the highest published overall result in comparison to other automated algorithms. To test general applicability, we also evaluated against the Kaggle Second Annual Data Science Bowl, where the evaluation metric was the indirect clinical measures of LV volume rather than direct myocardial contours. Our approach attained a Continuous Ranked Probability Score (CRPS) of 0.0124, which would have ranked tenth in the original challenge. With this we demonstrate the effectiveness of convolutional neural network regression paired with domain-specific features in clinical segmentation.
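To make the polar parameterization concrete, the sketch below maps radial distances predicted at fixed angles around an LV centerpoint back to Cartesian contour points; the centerpoint, the number of angles, and the constant radii are hypothetical, and the regression network itself is omitted.

```python
# Convert radial-distance parameters (polar space) back to contour points.
import numpy as np

def radii_to_contour(center, radii):
    """center: (x, y) LV centerpoint; radii: distances at equally spaced angles."""
    angles = np.linspace(0.0, 2.0 * np.pi, num=len(radii), endpoint=False)
    xs = center[0] + radii * np.cos(angles)
    ys = center[1] + radii * np.sin(angles)
    return np.stack([xs, ys], axis=1)

endo_radii = np.full(36, 22.0)             # hypothetical endocardial radii (pixels)
epi_radii = np.full(36, 32.0)              # hypothetical epicardial radii (pixels)
endo = radii_to_contour((96.0, 96.0), endo_radii)
epi = radii_to_contour((96.0, 96.0), epi_radii)
print(endo.shape, epi.shape)               # (36, 2) (36, 2)
```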
Article
We introduce a new methodology that combines deep learning and level set for the automated segmentation of the left ventricle of the heart from cardiac cine magnetic resonance (MR) data. This combination is relevant for segmentation problems, where the visual object of interest presents large shape and appearance variations, but the annotated training set is small, which is the case for various medical image analysis applications, including the one considered in this paper. In particular, level set methods are based on shape and appearance terms that use small training sets, but present limitations for modelling the visual object variations. Deep learning methods can model such variations using relatively small amounts of annotated training data, but they often need to be regularised to produce good generalisation. Therefore, the combination of these methods brings together the advantages of both approaches, producing a methodology that needs small training sets and produces accurate segmentation results. We test our methodology on the MICCAI 2009 left ventricle segmentation challenge database (containing 15 sequences for training, 15 for validation and 15 for testing), where our approach achieves the most accurate results in the semi-automated problem and state-of-the-art results for the fully automated challenge.
Article
Background: Recently there have been several clinical MR perfusion studies in patients with hypertrophic cardiomyopathy (HCM) who may suffer from myocardial ischemia due to coronary microvascular dysfunction. In these studies, data analysis relied on a manual procedure of tracing epicardial and endocardial borders. The goal of this work is to develop and validate a robust semi-automated analysis method for myocardial perfusion quantification in clinical HCM data. Method: Dynamic multi-slice stress perfusion MRI data were acquired from 18 HCM patients. The proposed semi-automated method required user input of two landmark selections: LV center point and RV insertion point. Automated segmentations of the endocardial and epicardial borders were performed in three short-axis slices using distance regularized level set evolution on RV, LV, and myocardial enhancement frames. Results: The proposed automated epicardial border detection method resulted in average radial distance errors of 7.5%, 9.5%, and 11.6% in basal, mid, and apical slices, respectively, when compared to manual tracing of the borders as a reference. In linear regression analysis, the highest correlation of myocardial upslope measurements was observed between the manual method and the proposed method in the anterolateral section (r=0.964), and the lowest correlation was observed in the inferoseptal section (r=0.866). Conclusion: The proposed semi-automated method for myocardial MR perfusion quantification is feasible in HCM patients who typically show (1) irregular myocardial shape and (2) low image contrast between the myocardium and its surrounding regions due to coronary microvascular disease.
Article
Segmentation of the left ventricle (LV) from cardiac magnetic resonance imaging (MRI) datasets is an essential step for calculation of clinical indices such as ventricular volume and ejection fraction. In this work, we employ deep learning algorithms combined with deformable models to develop and evaluate a fully automatic segmentation tool for the LV from short-axis cardiac MRI datasets. The method employs deep learning algorithms to learn the segmentation task from the ground truth data. Convolutional networks are employed to automatically detect the LV chamber in the MRI dataset. Stacked autoencoders are utilized to infer the shape of the LV. The inferred shape is incorporated into deformable models to improve the accuracy and robustness of the segmentation. We validated our method using 45 cardiac MR datasets taken from the MICCAI 2009 LV segmentation challenge and showed that it outperforms the state-of-the-art methods. Excellent agreement with the ground truth was achieved. Validation metrics, percentage of good contours, Dice metric, average perpendicular distance and conformity, were computed as 96.69%, 0.94, 1.81 mm and 0.86, versus those of 79.2%-95.62%, 0.87-0.9, 1.76-2.97 mm and 0.67-0.78, obtained by other methods, respectively.
Article
Deep neural nets with a large number of parameters are very powerful machine learning systems. However, overfitting is a serious problem in such networks. Large networks are also slow to use, making it difficult to deal with overfitting by combining the predictions of many different large neural nets at test time. Dropout is a technique for addressing this problem. The key idea is to randomly drop units (along with their connections) from the neural network during training. This prevents units from co-adapting too much. During training, dropout samples from an exponential number of different "thinned" networks. At test time, it is easy to approximate the effect of averaging the predictions of all these thinned networks by simply using a single unthinned network that has smaller weights. This significantly reduces overfitting and gives major improvements over other regularization methods. We show that dropout improves the performance of neural networks on supervised learning tasks in vision, speech recognition, document classification and computational biology, obtaining state-of-the-art results on many benchmark data sets.
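A minimal NumPy sketch of the dropout idea, using the now-common "inverted" variant that rescales surviving activations during training so the unthinned network can be used unchanged at test time; the layer shape and drop probability are arbitrary.

```python
# Inverted dropout: randomly zero units during training, rescale the survivors.
import numpy as np

rng = np.random.default_rng(0)

def dropout(activations, p=0.5, training=True):
    if not training or p == 0.0:
        return activations                      # test time: use the full network as-is
    mask = rng.random(activations.shape) >= p   # keep each unit with probability 1 - p
    return activations * mask / (1.0 - p)       # rescale so expected activation is unchanged

h = np.ones((4, 8))
print(dropout(h, p=0.5, training=True))         # roughly half the units zeroed
print(dropout(h, p=0.5, training=False))        # unchanged at test time
```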
Article
Deep learning tools have recently gained much attention in applied machine learning. However, such tools for regression and classification do not allow us to capture model uncertainty. Bayesian models offer us the ability to reason about model uncertainty, but usually come with a prohibitive computational cost. We show that dropout in multilayer perceptron models (MLPs) can be interpreted as a Bayesian approximation. Results are obtained for modelling uncertainty in dropout MLP models, extracting, from existing models, information that has until now been thrown away. This mitigates the problem of representing uncertainty in deep learning without sacrificing computational performance or test accuracy. We perform an exploratory study of the dropout uncertainty properties. Various network architectures and non-linearities are assessed on tasks of extrapolation, interpolation, and classification. We show that model uncertainty is important for classification tasks using MNIST as an example, and use the model's uncertainty in a Bayesian pipeline, with deep reinforcement learning as a concrete example.
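This Bayesian-approximation view underlies the Monte Carlo dropout sampling used in the main study. The following is a generic PyTorch sketch, assuming a segmentation model that contains dropout layers: dropout is kept active at test time, several stochastic forward passes are collected, and their mean and standard deviation serve as the prediction and the per-pixel uncertainty (SD) map. The toy model and sample count are placeholders, not the study's U-Net.

```python
# Monte Carlo dropout at test time: repeated stochastic forward passes.
import torch
import torch.nn as nn

def mc_dropout_predict(model, image, n_samples=20):
    model.eval()                                    # freeze batch norm, etc.
    for m in model.modules():                       # re-enable only the dropout layers
        if isinstance(m, (nn.Dropout, nn.Dropout2d)):
            m.train()
    with torch.no_grad():
        samples = torch.stack(
            [torch.sigmoid(model(image)) for _ in range(n_samples)]
        )
    return samples.mean(dim=0), samples.std(dim=0)  # prediction map, SD (uncertainty) map

toy_model = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                          nn.Dropout2d(0.5), nn.Conv2d(8, 1, 1))
mean_map, sd_map = mc_dropout_predict(toy_model, torch.randn(1, 1, 64, 64))
print(mean_map.shape, float(sd_map.sum()))          # summed SD as a single uncertainty score
```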
Article
Purpose: To develop an automated framework for accurate analysis of myocardial perfusion using first-pass magnetic resonance imaging. Methods: The proposed framework consists of four processing stages. First, in order to account for heart deformations due to respiratory motion and heart contraction, a two-step registration methodology is proposed, which has the ability to account for the global and local motions of the heart. The methodology involves an affine-based registration followed by a local B-splines alignment to maximize a new similarity function based on the first- and second-order normalized mutual information. Then the myocardium is segmented using a level-set function, its evolution being constrained by three features, namely, a weighted shape prior, a pixelwise mixed object/background image intensity distribution, and an energy of a second-order binary Markov-Gibbs random field spatial model. At the third stage, residual segmentation errors and imperfection of image alignment are reduced by employing a Laplace-based registration refinement step that provides accurate pixel-on-pixel matches on all segmented frames to generate accurate parametric perfusion maps. Finally, physiology is characterized by pixel-by-pixel mapping of empirical indexes (peak signal intensity, time-to-peak, initial upslope, and the average signal change of the slowly varying agent delivery phase), based on contrast agent dynamics. Results: We tested our framework on 24 perfusion data sets from 8 patients with ischemic damage who were undergoing a novel myoregeneration therapy. The performance of the processing steps of our framework is evaluated using both synthetic and in vivo data. First, our registration methodology is evaluated using realistic synthetic phantoms and a distance-based error metric, and an improvement of registration is documented using the proposed similarity measure (P-value ≤ 10⁻⁴). Second, evaluation of our segmentation using the Dice similarity coefficient documented an average of 0.910 ± 0.037 compared to two other segmentation methods that achieved average values of 0.862 ± 0.045 and 0.844 ± 0.047. Also, the receiver operating characteristic (ROC) analysis of our multifeature segmentation yielded an area under the ROC curve of 0.92, while segmentation based on intensity alone showed low performance (an area of 0.69). Moreover, our framework indicated the ability, using empirical perfusion indexes, to reveal regional perfusion improvements with therapy and transmural perfusion differences across the myocardial wall. Conclusions: By quantitative and visual assessment, our framework documented the ability to characterize regional and transmural perfusion, thereby augmenting the ability to assess follow-up treatment for patients undergoing myoregeneration therapy. This is afforded by our framework being able to handle both global and local deformations of the heart, accurately segment the myocardial wall, and provide accurate pixel-on-pixel matches of registered perfusion images.
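A generic sketch of the final stage described above, pixel-wise mapping of empirical perfusion indexes from a motion-corrected first-pass series; the baseline length, frame interval, and simple finite-difference upslope are illustrative assumptions rather than the authors' definitions.

```python
# Pixel-wise perfusion index maps from a registered first-pass series.
import numpy as np

def perfusion_maps(series, frame_interval=1.0, n_baseline=3):
    """series: (T, H, W) motion-corrected perfusion frames."""
    baseline = series[:n_baseline].mean(axis=0)
    enhancement = series - baseline                       # signal change per pixel
    peak = enhancement.max(axis=0)                        # peak signal intensity
    ttp = enhancement.argmax(axis=0) * frame_interval     # time-to-peak (s)
    upslope = np.diff(enhancement, axis=0).max(axis=0) / frame_interval
    return peak, ttp, upslope

series = np.random.rand(40, 96, 96)                       # placeholder time series
peak_map, ttp_map, upslope_map = perfusion_maps(series)
print(peak_map.shape, ttp_map.shape, upslope_map.shape)
```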
Article
Contrast material-enhanced myocardial perfusion imaging by using cardiac magnetic resonance (MR) imaging has, during the past decade, evolved into an accurate technique for diagnosing coronary artery disease, with excellent prognostic value. Advantages such as high spatial resolution; absence of ionizing radiation; and the ease of routine integration with an assessment of viability, wall motion, and cardiac anatomy are readily recognized. The need for training and technical expertise and the regulatory hurdles, which might prevent vendors from marketing cardiac MR perfusion imaging, may have hampered its progress. The current review considers both the technical developments and the clinical experience with cardiac MR perfusion imaging, which hopefully demonstrates that it has long passed the stage of a research technique. In fact, cardiac MR perfusion imaging is moving beyond traditional indications such as diagnosis of coronary disease to novel applications such as in congenital heart disease, where the imperatives of avoidance of ionizing radiation and achievement of high spatial resolution are of high priority. More wide use of cardiac MR perfusion imaging, and novel applications thereof, are aided by the progress in parallel imaging, high-field-strength cardiac MR imaging, and other technical advances discussed in this review. © RSNA, 2013.
Article
Aims: Perfusion-cardiac magnetic resonance (CMR) has emerged as a potential alternative to single-photon emission computed tomography (SPECT) to assess myocardial ischaemia non-invasively. The goal was to compare the diagnostic performance of perfusion-CMR and SPECT for the detection of coronary artery disease (CAD) using conventional X-ray coronary angiography (CXA) as the reference standard. Methods and results: In this multivendor trial, 533 patients, eligible for CXA or SPECT, were enrolled in 33 centres (USA and Europe) with 515 patients receiving MR contrast medium. Single-photon emission computed tomography and CXA were performed within 4 weeks before or after CMR in all patients. The prevalence of CAD in the sample was 49%. Drop-out rates for CMR and SPECT were 5.6 and 3.7%, respectively (P = 0.21). The primary endpoint was non-inferiority of CMR vs. SPECT for both sensitivity and specificity for the detection of CAD. Readers were blinded vs. clinical data, CXA, and imaging results. As a secondary endpoint, the safety profile of the CMR examination was evaluated. For CMR and SPECT, the sensitivity scores were 0.67 and 0.59, respectively, with the lower confidence level for the difference of +0.02, indicating superiority of CMR over SPECT. The specificity scores for CMR and SPECT were 0.61 and 0.72, respectively (lower confidence level for the difference: -0.17), indicating inferiority of CMR vs. SPECT. No severe adverse events occurred in the 515 patients. Conclusion: In this large multicentre, multivendor study, the sensitivity of perfusion-CMR to detect CAD was superior to SPECT, while its specificity was inferior to SPECT. Cardiac magnetic resonance is a safe alternative to SPECT to detect perfusion deficits in CAD.
Article
Guidelines for triaging patients for cardiac catheterization recommend a risk assessment and noninvasive testing. We determined patterns of noninvasive testing and the diagnostic yield of catheterization among patients with suspected coronary artery disease in a contemporary national sample. From January 2004 through April 2008, at 663 hospitals in the American College of Cardiology National Cardiovascular Data Registry, we identified patients without known coronary artery disease who were undergoing elective catheterization. The patients' demographic characteristics, risk factors, and symptoms and the results of noninvasive testing were correlated with the presence of obstructive coronary artery disease, which was defined as stenosis of 50% or more of the diameter of the left main coronary artery or stenosis of 70% or more of the diameter of a major epicardial vessel. A total of 398,978 patients were included in the study. The median age was 61 years; 52.7% of the patients were men, 26.0% had diabetes, and 69.6% had hypertension. Noninvasive testing was performed in 83.9% of the patients. At catheterization, 149,739 patients (37.6%) had obstructive coronary artery disease. No coronary artery disease (defined as <20% stenosis in all vessels) was reported in 39.2% of the patients. Independent predictors of obstructive coronary artery disease included male sex (odds ratio, 2.70; 95% confidence interval [CI], 2.64 to 2.76), older age (odds ratio per 5-year increment, 1.29; 95% CI, 1.28 to 1.30), presence of insulin-dependent diabetes (odds ratio, 2.14; 95% CI, 2.07 to 2.21), and presence of dyslipidemia (odds ratio, 1.62; 95% CI, 1.57 to 1.67). Patients with a positive result on a noninvasive test were moderately more likely to have obstructive coronary artery disease than those who did not undergo any testing (41.0% vs. 35.0%; P<0.001; adjusted odds ratio, 1.28; 95% CI, 1.19 to 1.37). In this study, slightly more than one third of patients without known disease who underwent elective cardiac catheterization had obstructive coronary artery disease. Better strategies for risk stratification are needed to inform decisions and to increase the diagnostic yield of cardiac catheterization in routine clinical practice.
Article
Quantitative determination of myocardial perfusion currently involves time-consuming postprocessing. This retrospective study presents automatic postprocessing consisting of image registration and image segmentation to obtain regional signal intensity time courses and quantitative perfusion values. The automatic postprocessing was tested in 75 examinations in volunteers and patients, 57 at rest and 18 under adenosine-induced stress, and compared with a manual evaluation. In a substudy consisting of 10 examinations, the interobserver variability of the manual evaluation was investigated. Manual evaluation resulted in perfusion values with a median of 0.70 ml/g/min ranging from 0.03 to 3.68 ml/g/min. For all 75 examinations, the variability (standard deviation of the differences) between automatic and manual evaluation was 0.34 ml/g/min. Interobserver variability was of a similar order, 0.35 ml/g/min for all measurements. Automatic evaluation was successfully applied to all datasets giving results equivalent to manual evaluation. The time of user interaction for one single slice could be reduced from 25 min for manual evaluation to less than 1 min using the automatic algorithm. This reduction may allow quantitative magnetic resonance perfusion imaging to become a routine clinical procedure.
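The variability figure quoted above is the standard deviation of the per-examination differences between automatic and manual perfusion values; a tiny sketch of that computation is shown below with made-up numbers.

```python
# Agreement between automatic and manual perfusion values:
# mean difference (bias) and SD of the differences (variability).
import numpy as np

auto = np.array([0.68, 0.72, 1.10, 0.55, 2.40])    # ml/g/min, hypothetical
manual = np.array([0.70, 0.65, 1.25, 0.60, 2.10])  # ml/g/min, hypothetical
diff = auto - manual
print("bias:", round(diff.mean(), 2), "variability (SD):", round(diff.std(ddof=1), 2))
```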
Leveraging uncertainty estimates for predicting segmentation quality
  • T. DeVries
  • G. W. Taylor