Article

A unified framework for cross-modality multi-atlas segmentation of brain MRI

Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Harvard Medical School, USA.
Medical image analysis (Impact Factor: 3.65). 08/2013; 17(8):1181-1191. DOI: 10.1016/j.media.2013.08.001
Source: PubMed

ABSTRACT

Multi-atlas label fusion is a powerful image segmentation strategy that is becoming increasingly popular in medical imaging. A standard label fusion algorithm relies on independently computed pairwise registrations between individual atlases and the (target) image to be segmented. These registrations are then used to propagate the atlas labels to the target space and fuse them into a single final segmentation. Such label fusion schemes commonly rely on the similarity between intensity values of the atlases and target scan, which is often problematic in medical imaging - in particular, when the atlases and target images are obtained via different sensor types or imaging protocols. In this paper, we present a generative probabilistic model that yields an algorithm for solving the atlas-to-target registrations and label fusion steps simultaneously. The proposed model does not directly rely on the similarity of image intensities. Instead, it exploits the consistency of voxel intensities within the target scan to drive the registration and label fusion, hence the atlases and target image can be of different modalities. Furthermore, the framework models the joint warp of all the atlases, introducing interdependence between the registrations. We use variational expectation maximization and the Demons registration framework in order to efficiently identify the most probable segmentation and registrations. We use two sets of experiments to illustrate the approach, where proton density (PD) MRI atlases are used to segment T1-weighted brain scans and vice versa. Our results clearly demonstrate the accuracy gain due to exploiting within-target intensity consistency and integrating registration into label fusion.
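The baseline the abstract argues against, independently warped atlases whose propagated labels are fused by voting, is easy to make concrete. Below is a minimal sketch of that conventional fusion step, assuming the atlas label maps have already been registered and resampled into target space; all names are illustrative, and this shows the standard pipeline, not the joint model proposed in the paper.

```python
import numpy as np

def majority_vote_fusion(propagated_labels):
    """Fuse atlas label maps (already warped into target space) by majority voting.

    propagated_labels: array of shape (n_atlases, X, Y, Z) with integer labels.
    Returns an (X, Y, Z) fused label map.
    """
    labels = np.asarray(propagated_labels)
    n_classes = int(labels.max()) + 1
    # Per-voxel vote count for every class, stacked along a new leading axis.
    votes = np.stack([(labels == c).sum(axis=0) for c in range(n_classes)])
    # The fused label is the class with the most votes at each voxel.
    return votes.argmax(axis=0)
```

The paper replaces this hard, intensity-driven two-step pipeline with a probabilistic model that estimates the registrations and the fused segmentation jointly.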

    • "In the context of brain MRI segmentation, there is ample literature on segmenting structures such as the hippocampus, amygdala, caudate, nucleus, ventricles, etc. Multi-atlas based segmentation is the most popular approach used to achieve the aforementioned tasks. Global, semi-global or local weights are learned to propagate the voxel-wise labels from the warped atlas to the subject, as described in[12,1,10,9,5,6,2,11,4,3,13]. Most of the multi-atlas based segmentation algorithms require fully (manually) labeled atlases as input. "
    ABSTRACT: Fully labeled manual segmentation, a cornerstone of neuro-anatomical structure segmentation, is known to be a tedious, time-consuming and error-prone task even for trained experts. In this paper, we propose a novel partially labeled multiple-atlas-based segmentation algorithm which can simultaneously segment multiple structures from a given image. Intra- and inter-structural constraints are imposed to preserve spatial relationships and to propagate the segmentation from the labeled regions to the unlabeled regions. We present several experiments on real data sets which show that our approach yields accurate segmentations of the test data even in the absence of a large percentage of the atlas labels. Further, our approach has the ability to refine the given partially labeled atlases via a supervised learning stage.
    Full-text · Conference Paper · Apr 2016
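The excerpt above mentions global, semi-global, or local weights for propagating voxel-wise labels. A common local variant weights each atlas vote by patch-wise intensity agreement with the target; the sketch below is one plausible implementation (the patch size, the weight form exp(-beta * MSD), and all names are illustrative assumptions, not taken from the cited works).

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_weighted_fusion(target, warped_images, warped_labels, patch=5, beta=1.0):
    """Locally weighted voting: each warped atlas votes with a weight derived
    from its local (patch-wise) intensity agreement with the target image.

    target:        (X, Y, Z) target intensities.
    warped_images: (n_atlases, X, Y, Z) atlas intensities in target space.
    warped_labels: (n_atlases, X, Y, Z) atlas labels in target space.
    """
    n_classes = int(np.max(warped_labels)) + 1
    fused = np.zeros((n_classes,) + target.shape)
    for img, lab in zip(warped_images, warped_labels):
        # Mean squared intensity difference over a local patch ...
        msd = uniform_filter((target.astype(float) - img) ** 2, size=patch)
        # ... mapped to a similarity weight; beta controls weight sharpness.
        w = np.exp(-beta * msd)
        for c in range(n_classes):
            fused[c] += w * (lab == c)
    return fused.argmax(axis=0)
```

Note that such intensity-based weights are exactly what breaks down across modalities, which motivates the within-target consistency model of the paper under discussion.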
    • "While multi-atlas label fusion can deal with contrast variations due to different ICs by using, for example, mutual information as a similarity measure instead of SSD, it may cause instability in nonrigid registration. Although a recent work addressed cross-modality multiatlas segmentation (Iglesias et al., 2013), its usefulness was shown only to brain magnetic resonance (MR) images and it was not applied to abdominal organs. Machine learning approaches, such as decision forests, also provide general segmentation frameworks (Criminisi et al., 2013; Glocker et al., 2012; Montillo et al., 2011; Seifert et al., 2009). "
    ABSTRACT: This paper addresses the automated segmentation of multiple organs in upper abdominal computed tomography (CT) data. The aim of our study is to develop methods to effectively construct the conditional priors and use their prediction power for more accurate segmentation as well as easy adaptation to various imaging conditions in CT images, as observed in clinical practice. We propose a general framework of multi-organ segmentation which effectively incorporates interrelations among multiple organs and easily adapts to various imaging conditions without the need for supervised intensity information. The features of the framework are as follows: (1) A method for modeling conditional shape and location (shape-location) priors, which we call prediction-based priors, is developed to derive accurate priors specific to each subject, which enables the estimation of intensity priors without the need for supervised intensity information. (2) An organ correlation graph is introduced, which defines how the conditional priors are constructed and how segmentation processes of multiple organs are executed. In our framework, predictor organs, whose segmentation is sufficiently accurate by using conventional single-organ segmentation methods, are pre-segmented, and the remaining organs are hierarchically segmented using conditional shape-location priors. The proposed framework was evaluated through the segmentation of eight abdominal organs (liver, spleen, left and right kidneys, pancreas, gallbladder, aorta, and inferior vena cava) from 134 CT datasets from 86 patients obtained under six imaging conditions at two hospitals. The experimental results show the effectiveness of the proposed prediction-based priors and the applicability to various imaging conditions without the need for supervised intensity information. Average Dice coefficients for the liver, spleen, and kidneys were more than 92%, and were around 73% and 67% for the pancreas and gallbladder, respectively.
    Full-text · Article · Jul 2015 · Medical image analysis
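The excerpt in this item notes that mutual information (MI) can replace SSD as a similarity measure when atlas and target contrasts differ. For reference, a minimal histogram-based MI estimate looks like this (the bin count and function name are assumptions; production registration packages use more careful estimators):

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Histogram-based mutual information between two images of possibly
    different modalities; higher values indicate better alignment."""
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = hist / hist.sum()                    # joint distribution estimate
    px = pxy.sum(axis=1, keepdims=True)        # marginal over a
    py = pxy.sum(axis=0, keepdims=True)        # marginal over b
    nz = pxy > 0                               # avoid log(0)
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())
```

Because MI only rewards statistical dependence between intensities rather than equality, it tolerates contrast differences, but, as the excerpt points out, it can make nonrigid registration less stable than SSD.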
    • "proposed a generative model for label fusion that allows for the estimation of a spatially varying membership function that models the target image as a combination of one or more atlases . Aiming to further reduce the influence of registration errors in the final segmentation, methods that couple registration and segmentation has been proposed [5] [6]. "
    ABSTRACT: Multi-atlas segmentation is commonly performed in two separate steps: i) multiple pairwise registrations, and ii) fusion of the deformed segmentation masks towards labeling objects of interest. In this paper we propose an approach for integrated volume segmentation through multi-atlas registration. To tackle this problem, we opt for a graphical model where registration and segmentation nodes are coupled. The aim is to recover simultaneously all atlas deformations along with selection masks quantifying the participation of each atlas per segmentation voxel. The above is modeled using a pairwise graphical model in which deformation and segmentation variables are represented explicitly. A sequential optimization relaxation is proposed for efficient inference. Promising performance is reported on the IBSR dataset in comparison to majority voting and local appearance-based weighted voting.
    Preview · Article · Apr 2015
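The citing works above report Dice overlap as their accuracy measure (on the IBSR dataset and on the abdominal CT study, respectively). For completeness, a minimal per-label Dice computation, with hypothetical names:

```python
import numpy as np

def dice(seg, ref, label):
    """Dice overlap for one structure label between a segmentation and a
    reference: 2*|A & B| / (|A| + |B|), in [0, 1]."""
    a = seg == label
    b = ref == label
    denom = a.sum() + b.sum()
    # Convention: if the label is absent from both volumes, return 1.0.
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0
```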