Article

A unified framework for cross-modality multi-atlas segmentation of brain MRI

Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Harvard Medical School, USA.
Medical Image Analysis 08/2013; 17(8):1181-1191. DOI: 10.1016/j.media.2013.08.001 · 3.68 Impact Factor
Source: PubMed

ABSTRACT: Multi-atlas label fusion is a powerful image segmentation strategy that is becoming increasingly popular in medical imaging. A standard label fusion algorithm relies on independently computed pairwise registrations between the individual atlases and the (target) image to be segmented. These registrations are then used to propagate the atlas labels to the target space, where they are fused into a single final segmentation. Such label fusion schemes commonly rely on the similarity between the intensity values of the atlases and the target scan, which is often problematic in medical imaging, particularly when the atlases and target images are obtained with different sensor types or imaging protocols. In this paper, we present a generative probabilistic model that yields an algorithm for solving the atlas-to-target registration and label fusion steps simultaneously. The proposed model does not directly rely on the similarity of image intensities. Instead, it exploits the consistency of voxel intensities within the target scan to drive both registration and label fusion, so the atlases and target image can be of different modalities. Furthermore, the framework models the joint warp of all the atlases, introducing interdependence between the registrations. We use variational expectation maximization and the Demons registration framework to efficiently identify the most probable segmentation and registrations. We illustrate the approach with two sets of experiments, in which proton density (PD) MRI atlases are used to segment T1-weighted brain scans and vice versa. Our results clearly demonstrate the accuracy gained by exploiting within-target intensity consistency and by integrating registration into label fusion.
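To make the core idea concrete, here is a minimal NumPy sketch of label fusion driven purely by within-target intensity consistency. It is an illustration under strong simplifying assumptions, not the paper's algorithm: the atlas-to-target registrations are held fixed (the paper optimizes them jointly with the fusion), and each label's intensity in the target is modelled as a single Gaussian. The function name and parameters are hypothetical.

    import numpy as np

    def em_label_fusion(target, atlas_probs, n_iters=10, eps=1e-6):
        """Toy EM label fusion using within-target intensity consistency.

        target      : (V,) target voxel intensities (any modality).
        atlas_probs : (V, L) label prior per voxel, e.g. the average of
                      the propagated (warped) atlas label maps.
        Returns     : (V, L) posterior label probabilities.

        Hypothetical sketch: registration is assumed fixed, and each
        label's target intensity is a single Gaussian re-estimated
        inside EM, so no atlas-target intensity similarity is used.
        """
        post = atlas_probs.copy()
        for _ in range(n_iters):
            # M-step: per-label Gaussian parameters from soft assignments.
            w = post.sum(axis=0) + eps
            mu = (post * target[:, None]).sum(axis=0) / w
            var = (post * (target[:, None] - mu) ** 2).sum(axis=0) / w + eps
            # E-step: combine the intensity likelihood with the atlas prior.
            ll = -0.5 * ((target[:, None] - mu) ** 2 / var + np.log(var))
            post = atlas_probs * np.exp(ll - ll.max(axis=1, keepdims=True))
            post /= post.sum(axis=1, keepdims=True) + eps
        return post

A hard segmentation is the per-voxel argmax of the returned posterior. Because the Gaussian parameters are re-estimated from the target scan itself, no atlas intensity enters the likelihood, which is what allows the atlases and the target to come from different modalities.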

  • ABSTRACT: Brain tumor segmentation is an important procedure for early tumor diagnosis and radiotherapy planning. Although numerous brain tumor segmentation methods have been presented, improving them remains challenging because brain tumor MR images exhibit complex characteristics, such as high diversity in tumor appearance and ambiguous tumor boundaries. To address this problem, we propose a novel automatic tumor segmentation method for MR images that treats segmentation as a classification problem: the local independent projection-based classification (LIPC) method is used to assign each voxel to a class. A novel classification framework is derived by introducing local independent projection into the classical classification model. Locality is important in the calculation of local independent projections for LIPC; it also motivates the choice of local anchor embedding over other coding methods for solving the linear projection weights. Moreover, LIPC considers the data distribution of different classes by learning a softmax regression model, which further improves classification performance. In this study, 80 brain tumor MR images with ground truth are used as training data and 40 images without ground truth as testing data; the segmentation results on the testing data are evaluated with an online evaluation tool. The average Dice similarities of the proposed method for segmenting the complete tumor, tumor core, and contrast-enhancing tumor on real patient data are 0.84, 0.685, and 0.585, respectively, which is comparable to other state-of-the-art methods.
    IEEE transactions on bio-medical engineering 10/2014; 61(10):2633-2645. DOI:10.1109/TBME.2014.2325410 · 2.23 Impact Factor
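    A rough sketch of the LIPC idea, classifying a voxel by how well each class's locally selected training samples reconstruct its feature vector, might look as follows. The ridge-regularised least squares below stands in for local anchor embedding, the softmax-regression stage is omitted, and all names and parameters are hypothetical.

    import numpy as np

    def lipc_classify(x, dicts, k=10, lam=1e-3):
        """Hypothetical sketch of local independent projection-based
        classification (LIPC) for a single voxel feature vector.

        x     : (d,) feature vector (e.g. a patch around the voxel).
        dicts : list of (n_c, d) arrays, one training dictionary per class.
        Returns the index of the class whose local subspace reconstructs
        x with the smallest error.
        """
        errs = []
        for D in dicts:
            # Locality: keep only the k nearest atoms of this class.
            idx = np.argsort(((D - x) ** 2).sum(axis=1))[:k]
            A = D[idx].T                                  # (d, k)
            # Projection weights by regularised least squares.
            w = np.linalg.solve(A.T @ A + lam * np.eye(len(idx)), A.T @ x)
            errs.append(((A @ w - x) ** 2).sum())
        return int(np.argmin(errs))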
  • ABSTRACT: Multi-atlas segmentation infers the target image segmentation by combining prior anatomical knowledge encoded in multiple atlases. It has been applied quite successfully to medical image segmentation in recent years, yielding highly accurate and robust segmentations for many anatomical structures. However, most existing multi-atlas segmentation methods guide label fusion using only the intensity information within a small patch, neglecting other useful information such as gradient and contextual information (the appearance of surrounding regions). This paper proposes to combine intensity, gradient and contextual information into an augmented feature vector and to incorporate it into multi-atlas segmentation. It also explores an alternative to the K-nearest-neighbour (KNN) classifier for multi-atlas label fusion, using a support vector machine (SVM) instead. Experimental results on a short-axis cardiac MR data set of 83 subjects demonstrate that the accuracy of multi-atlas segmentation can be significantly improved by using the augmented feature vector. The mean Dice metric of the proposed segmentation framework is 0.81 for the left ventricular myocardium on this data set, compared to 0.79 for conventional multi-atlas patch-based segmentation (Coupé et al., 2011; Rousseau et al., 2011). A major contribution of this paper is its demonstration that the performance of non-local patch-based segmentation can be improved by using augmented features.
    Medical Image Analysis 09/2014; 19(1). DOI:10.1016/j.media.2014.09.005 · 3.68 Impact Factor
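    A compact sketch of such an augmented feature vector, for a 2D image and with scikit-learn's SVC standing in for the paper's SVM label fusion; the patch radius, context offsets and kernel choice are illustrative assumptions, not the paper's settings.

    import numpy as np
    from sklearn.svm import SVC  # assumes scikit-learn is installed

    def augmented_feature(img, gy, gx, y, x, r=2,
                          ctx=((0, 8), (8, 0), (0, -8), (-8, 0))):
        """Concatenate intensity, gradient and contextual features for
        one pixel of a 2D image (hypothetical radius and offsets)."""
        sl = (slice(y - r, y + r + 1), slice(x - r, x + r + 1))
        patch = img[sl].ravel()                               # intensity
        grad = np.concatenate([gy[sl].ravel(), gx[sl].ravel()])  # gradient
        context = np.array([img[y + dy, x + dx] for dy, dx in ctx])
        return np.concatenate([patch, grad, context])

    # Label fusion cast as classification: sample training features and
    # labels from the registered atlases, fit an SVM, then predict each
    # target pixel's label (X_train, y_train, target_img prepared elsewhere).
    # gy, gx = np.gradient(target_img.astype(float))
    # clf = SVC(kernel="rbf").fit(X_train, y_train)
    # label = clf.predict(augmented_feature(target_img, gy, gx, y, x)[None, :])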
  • ABSTRACT: Label fusion is a critical step in many image segmentation frameworks (e.g., multi-atlas segmentation) as it provides a mechanism for generalizing a collection of labeled examples into a single estimate of the underlying segmentation. In the multi-label case, typical label fusion algorithms treat all labels equally, fully neglecting the known, yet complex, anatomical relationships exhibited in the data. To address this problem, we propose a generalized statistical fusion framework using hierarchical models of rater performance. Building on the seminal work in statistical fusion, we reformulate the traditional rater performance model from a multi-tiered hierarchical perspective. The proposed approach provides a natural framework for leveraging known anatomical relationships and accurately modeling the types of errors that raters (or atlases) make within a hierarchically consistent formulation. Herein, the primary contributions of this manuscript are: (1) we provide a theoretical advancement to the statistical fusion framework that enables the simultaneous estimation of multiple (hierarchical) confusion matrices for each rater, (2) we highlight the amenability of the proposed hierarchical formulation to many of the state-of-the-art advancements to the statistical fusion framework, and (3) we demonstrate statistically significant improvement on both simulated and empirical data. Specifically, both theoretically and empirically, we show that the proposed hierarchical performance model provides substantial and significant accuracy benefits when applied to two disparate multi-atlas segmentation tasks: (1) 133-label whole-brain anatomy on structural MR, and (2) orbital anatomy on CT.
    Medical Image Analysis 10/2014; 18(7). DOI:10.1016/j.media.2014.06.005 · 3.68 Impact Factor
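    For orientation, the flat rater-performance model that this hierarchical work generalises can be sketched as a STAPLE-style EM in a few lines of NumPy; in the hierarchical formulation each rater's single confusion matrix would be replaced by one matrix per level of the label tree. This is a simplified baseline sketch, not the paper's method, and the names and defaults are hypothetical.

    import numpy as np

    def staple_fusion(votes, n_labels, n_iters=20, eps=1e-6):
        """Flat rater-performance fusion (STAPLE-style EM baseline).

        votes : (R, V) integer label assigned by each of R raters/atlases
                at each of V voxels.
        Returns (V, L) posterior label probabilities.
        """
        R, V = votes.shape
        L = n_labels
        post = np.full((V, L), 1.0 / L)  # flat prior over true labels
        # Initialise each rater as mostly reliable (rows sum to 1).
        theta = np.stack([np.eye(L) * 0.9 + 0.1 / L for _ in range(R)])
        for _ in range(n_iters):
            # E-step: P(true label | votes) under the current confusions.
            logp = np.zeros((V, L))
            for r in range(R):
                logp += np.log(theta[r][:, votes[r]].T + eps)  # (V, L)
            post = np.exp(logp - logp.max(axis=1, keepdims=True))
            post /= post.sum(axis=1, keepdims=True)
            # M-step: re-estimate each rater's confusion matrix.
            for r in range(R):
                for l in range(L):
                    counts = np.bincount(votes[r], weights=post[:, l],
                                         minlength=L)
                    theta[r, l] = counts / (counts.sum() + eps)
        return post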