A unified framework for cross-modality multi-atlas segmentation of brain MRI

Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Harvard Medical School, USA.
Medical image analysis (Impact Factor: 3.65). 08/2013; 17(8):1181-1191. DOI: 10.1016/


Multi-atlas label fusion is a powerful image segmentation strategy that is becoming increasingly popular in medical imaging. A standard label fusion algorithm relies on independently computed pairwise registrations between individual atlases and the (target) image to be segmented. These registrations are then used to propagate the atlas labels to the target space and fuse them into a single final segmentation. Such label fusion schemes commonly rely on the similarity between intensity values of the atlases and the target scan, which is often problematic in medical imaging, in particular when the atlases and target images are obtained via different sensor types or imaging protocols. In this paper, we present a generative probabilistic model that yields an algorithm for solving the atlas-to-target registration and label fusion steps simultaneously. The proposed model does not directly rely on the similarity of image intensities. Instead, it exploits the consistency of voxel intensities within the target scan to drive the registration and label fusion, so the atlases and target image can be of different modalities. Furthermore, the framework models the joint warp of all the atlases, introducing interdependence between the registrations. We use variational expectation maximization and the Demons registration framework to efficiently identify the most probable segmentation and registrations. We illustrate the approach with two sets of experiments in which proton density (PD) MRI atlases are used to segment T1-weighted brain scans and vice versa. Our results clearly demonstrate the accuracy gain due to exploiting within-target intensity consistency and integrating registration into label fusion.
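To make the central idea concrete, the sketch below shows a much-simplified, modality-independent label fusion step in the spirit of the abstract: per-label intensity statistics are estimated from the target scan itself, so no intensity similarity between atlases and target is needed. This is an illustrative approximation only, not the paper's algorithm: the registrations are assumed fixed and given as propagated atlas labels (the joint atlas warp, Demons updates, and full variational EM are omitted), and the vote-counting prior, per-label Gaussian intensity model, and the function name `fuse_labels` are our own simplifications.

```python
# Minimal sketch of label fusion driven by within-target intensity consistency.
# Illustrative only; assumes atlas registrations are fixed and given as
# propagated hard labels.
import numpy as np

def fuse_labels(target, atlas_labels, n_labels, n_iter=20, eps=1e-8):
    """target: (V,) target intensities (any modality).
    atlas_labels: (A, V) propagated hard labels from A registered atlases."""
    V = target.shape[0]
    # Spatial prior from atlas votes (simple vote counting).
    prior = np.zeros((n_labels, V))
    for lab in range(n_labels):
        prior[lab] = (atlas_labels == lab).mean(axis=0)
    prior = (prior + eps) / (prior + eps).sum(axis=0, keepdims=True)

    post = prior.copy()
    mu = np.full(n_labels, target.mean())
    var = np.full(n_labels, target.var() + eps)
    for _ in range(n_iter):
        # M-step: per-label Gaussian intensity parameters estimated from the
        # target scan itself, so no atlas/target intensity similarity is used.
        for lab in range(n_labels):
            w = post[lab]
            mu[lab] = (w * target).sum() / (w.sum() + eps)
            var[lab] = (w * (target - mu[lab]) ** 2).sum() / (w.sum() + eps) + eps
        # E-step: label posteriors combine the atlas prior with the
        # target-intensity likelihood.
        lik = np.exp(-0.5 * (target[None, :] - mu[:, None]) ** 2 / var[:, None])
        lik /= np.sqrt(2 * np.pi * var[:, None])
        post = prior * lik
        post /= post.sum(axis=0, keepdims=True) + eps
    return post.argmax(axis=0), post
```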

    • "While multi-atlas label fusion can deal with contrast variations due to different ICs by using, for example, mutual information as a similarity measure instead of SSD, it may cause instability in nonrigid registration. Although a recent work addressed cross-modality multiatlas segmentation (Iglesias et al., 2013), its usefulness was shown only to brain magnetic resonance (MR) images and it was not applied to abdominal organs. Machine learning approaches, such as decision forests, also provide general segmentation frameworks (Criminisi et al., 2013; Glocker et al., 2012; Montillo et al., 2011; Seifert et al., 2009). "
    ABSTRACT: This paper addresses the automated segmentation of multiple organs in upper abdominal computed tomography (CT) data. The aim of our study is to develop methods to effectively construct conditional priors and use their predictive power for more accurate segmentation, as well as easy adaptation to the various imaging conditions of CT images observed in clinical practice. We propose a general framework for multi-organ segmentation which effectively incorporates interrelations among multiple organs and easily adapts to various imaging conditions without the need for supervised intensity information. The features of the framework are as follows: (1) A method for modeling conditional shape and location (shape-location) priors, which we call prediction-based priors, is developed to derive accurate priors specific to each subject, which enables the estimation of intensity priors without the need for supervised intensity information. (2) An organ correlation graph is introduced, which defines how the conditional priors are constructed and how the segmentation processes of multiple organs are executed. In our framework, predictor organs, which can be segmented sufficiently accurately using conventional single-organ segmentation methods, are pre-segmented, and the remaining organs are hierarchically segmented using conditional shape-location priors. The proposed framework was evaluated through the segmentation of eight abdominal organs (liver, spleen, left and right kidneys, pancreas, gallbladder, aorta, and inferior vena cava) from 134 CT datasets of 86 patients, obtained under six imaging conditions at two hospitals. The experimental results show the effectiveness of the proposed prediction-based priors and their applicability to various imaging conditions without the need for supervised intensity information. Average Dice coefficients for the liver, spleen, and kidneys were more than 92%, and were around 73% and 67% for the pancreas and gallbladder, respectively.
    Medical image analysis 07/2015; 26(1). DOI:10.1016/ · 3.65 Impact Factor
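The hierarchical, graph-driven ordering described in the abstract above can be illustrated with a short sketch: predictor organs (with no incoming dependencies) are segmented first using single-organ methods, and the remaining organs are processed in topological order, each conditioned on its already-segmented predecessors. The edge set shown, and the functions segment_single and segment_conditional, are hypothetical placeholders chosen for illustration, not the paper's actual graph or implementation.

```python
# Hedged sketch of segmentation ordered by an organ correlation graph.
# The dependency edges and the two segmentation callbacks are illustrative.
from graphlib import TopologicalSorter

# organ -> organs it depends on (illustrative edges only)
correlation_graph = {
    "liver": [], "spleen": [], "left_kidney": [], "right_kidney": [],
    "aorta": [],
    "inferior_vena_cava": ["aorta", "liver"],
    "pancreas": ["liver", "spleen", "left_kidney"],
    "gallbladder": ["liver"],
}

def segment_all(ct_volume, segment_single, segment_conditional):
    """Segment organs in dependency order; returns an organ -> mask dict."""
    masks = {}
    for organ in TopologicalSorter(correlation_graph).static_order():
        deps = correlation_graph[organ]
        if not deps:                      # predictor organ: single-organ method
            masks[organ] = segment_single(ct_volume, organ)
        else:                             # conditional shape-location prior
            prior = {d: masks[d] for d in deps}
            masks[organ] = segment_conditional(ct_volume, organ, prior)
    return masks
```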
    • "proposed a generative model for label fusion that allows for the estimation of a spatially varying membership function that models the target image as a combination of one or more atlases . Aiming to further reduce the influence of registration errors in the final segmentation, methods that couple registration and segmentation has been proposed [5] [6]. "
    ABSTRACT: Multi-atlas segmentation is commonly performed in two separate steps: (i) multiple pairwise registrations, and (ii) fusion of the deformed segmentation masks towards labeling objects of interest. In this paper, we propose an approach for integrated volume segmentation through multi-atlas registration. To tackle this problem, we opt for a graphical model in which registration and segmentation nodes are coupled. The aim is to recover all atlas deformations simultaneously, along with selection masks quantifying the participation of each atlas at each segmented voxel. This is formulated as a pairwise graphical model in which deformation and segmentation variables are modeled explicitly. A sequential optimization relaxation is proposed for efficient inference. Promising performance is reported on the IBSR dataset in comparison with majority voting and local appearance-based weighted voting.
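The coupled formulation described above can be summarized, in our own assumed notation rather than the paper's exact terms, by a generic pairwise energy over the atlas deformations and per-voxel atlas selection variables:

$$
E(\{d_a\},\{s_p\}) \;=\; \sum_{a}\sum_{p} [\,s_p = a\,]\,\rho\!\big(T(p),\, A_a(p + d_a(p))\big)
\;+\; \lambda \sum_{a}\sum_{(p,q)\in\mathcal{N}} \big\| d_a(p) - d_a(q) \big\|
\;+\; \mu \sum_{(p,q)\in\mathcal{N}} [\, s_p \neq s_q \,],
$$

where $T$ is the target image, $A_a$ the $a$-th atlas with deformation field $d_a$, $s_p$ the atlas selected at voxel $p$, $\rho$ a matching criterion, and $\mathcal{N}$ the voxel neighborhood system. The first term ties each voxel's labeling to the atlas selected there, the second regularizes each deformation field, and the third encourages spatially coherent atlas selection; a sequential relaxation then alternates between updating the deformation and selection variables.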
    • "Atlas-based segmentation methods have been extensively investigated [36] [37]. Brain atlases can provide important data prior to tumor segmentation enhancement by measuring the difference between abnormal and normal brains. "
    ABSTRACT: Brain tumor segmentation is an important procedure for early tumor diagnosis and radiotherapy planning. Although numerous brain tumor segmentation methods have been presented, improving tumor segmentation remains challenging because brain tumor MRI images exhibit complex characteristics, such as high diversity in tumor appearance and ambiguous tumor boundaries. To address this problem, we propose a novel automatic tumor segmentation method for MRI images. This method treats tumor segmentation as a classification problem, using the local independent projection-based classification (LIPC) method to classify each voxel into different classes. A novel classification framework is derived by introducing the local independent projection into the classical classification model. Locality is important in the calculation of local independent projections for LIPC, and is also considered when determining whether local anchor embedding is more applicable than other coding methods for solving the linear projection weights. Moreover, LIPC considers the data distribution of different classes by learning a softmax regression model, which can further improve classification performance. In this study, 80 brain tumor MRI images with ground truth data are used as training data and 40 images without ground truth data are used as testing data. The segmentation results on the testing data are evaluated by an online evaluation tool. The average Dice similarities of the proposed method for segmenting the complete tumor, tumor core, and contrast-enhancing tumor on real patient data are 0.84, 0.685, and 0.585, respectively. These results are comparable to other state-of-the-art methods.
    IEEE transactions on bio-medical engineering 10/2014; 61(10):2633-2645. DOI:10.1109/TBME.2014.2325410 · 2.35 Impact Factor
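The final classification stage mentioned in the abstract above, a softmax regression model over voxel features, can be sketched briefly. The feature extraction and the local independent projection coding step are abstracted into a given feature matrix; the function names and the simple gradient-descent training loop below are illustrative assumptions, not the LIPC implementation.

```python
# Minimal sketch of voxel-wise classification with a softmax regression model.
# Features are assumed to be precomputed per voxel; this is not the LIPC pipeline.
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)      # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def train_softmax(X, y, n_classes, lr=0.1, n_iter=500, reg=1e-4):
    """X: (N, D) voxel features; y: (N,) integer class labels."""
    N, D = X.shape
    W = np.zeros((D, n_classes))
    Y = np.eye(n_classes)[y]                  # one-hot targets
    for _ in range(n_iter):
        P = softmax(X @ W)
        grad = X.T @ (P - Y) / N + reg * W    # cross-entropy gradient + L2 term
        W -= lr * grad
    return W

def classify_voxels(X, W):
    return softmax(X @ W).argmax(axis=1)      # predicted class per voxel
```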