Deformable M-Reps for 3D Medical Image Segmentation.

Medical Image Display & Analysis Group, University of North Carolina, Chapel Hill.
International Journal of Computer Vision 11/2003; 55(2-3):85-106. DOI: 10.1023/A:1026313132218

ABSTRACT: M-reps (formerly called DSLs) are a multiscale medial means for modeling and rendering 3D solid geometry. They are particularly well suited to modeling anatomic objects and, in particular, to capturing prior geometric information effectively in deformable-model segmentation approaches. The representation is based on figural models, which define objects at coarse scale by a hierarchy of figures, each figure generally a slab representing a solid region and its boundary simultaneously. This paper focuses on the use of single-figure models to segment objects of relatively simple structure. A single figure is a sheet of medial atoms, interpolated from the model formed by a net, i.e., a mesh or chain, of medial atoms (hence the name m-reps). Each atom models a solid region via not only a position and a width but also a local figural frame giving figural directions and an object angle between opposing, corresponding positions on the boundary implied by the m-rep. The special capability of an m-rep is to provide spatial and orientational correspondence between an object in two different states of deformation. This ability is central to effective measurement of both geometric typicality and geometry-to-image match, the two terms of the objective function optimized in segmentation by deformable models. The other property of m-reps central to effective segmentation is their support for segmentation at multiple levels of scale, with successively finer precision. Objects modeled by single figures are segmented first by a similarity transform augmented by object elongation, then by adjustment of each medial atom, and finally by displacing a dense sampling of the m-rep implied boundary. While these models and approaches also exist in 2D, we focus on 3D objects. The segmentation of the kidney from CT and the hippocampus from MRI serve as the major examples in this paper. The accuracy of segmentation as compared to manual, slice-by-slice segmentation is reported.
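As a concrete reading of the representation sketched in the abstract, the following minimal Python sketch builds one medial atom and computes its two implied boundary points. It is illustrative only: the frame vectors (called b and n here), the spoke construction x + r(cos θ · b ± sin θ · n), and the class layout are assumptions made for this example, not the paper's exact parameterization or code.

```python
import numpy as np

class MedialAtom:
    """One medial atom of a single-figure m-rep: hub position, local width,
    figural frame, and object angle (a hedged sketch, not the paper's code)."""

    def __init__(self, x, r, b, n, theta):
        self.x = np.asarray(x, dtype=float)   # hub position on the medial sheet
        self.r = float(r)                     # local width (spoke length)
        self.b = np.asarray(b, dtype=float)   # bisector of the two spokes
        self.b /= np.linalg.norm(self.b)
        self.n = np.asarray(n, dtype=float)   # normal to the medial sheet
        self.n /= np.linalg.norm(self.n)
        self.theta = float(theta)             # object angle between b and each spoke

    def implied_boundary_points(self):
        """Two opposing boundary points implied by the atom, assumed here
        to lie at x + r * (cos(theta) * b +/- sin(theta) * n)."""
        up = np.cos(self.theta) * self.b + np.sin(self.theta) * self.n
        down = np.cos(self.theta) * self.b - np.sin(self.theta) * self.n
        return self.x + self.r * up, self.x + self.r * down


# A slab-like interior atom: an object angle of pi/2 places the two implied
# boundary points symmetrically above and below the medial sheet.
atom = MedialAtom(x=[0.0, 0.0, 0.0], r=1.0, b=[1.0, 0.0, 0.0],
                  n=[0.0, 0.0, 1.0], theta=np.pi / 2)
print(atom.implied_boundary_points())
```

In a full single-figure model, a net (mesh) of such atoms is interpolated into a continuous sheet, and the boundary it implies is what the three segmentation stages described above deform: first a similarity transform plus elongation, then atom-by-atom adjustment, and finally dense boundary displacement.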

    ABSTRACT: The basic, still unanswered question about visual object representation is this: what specific information is encoded by neural signals? Theorists have long predicted that neurons would encode medial axis or skeletal object shape, yet recent studies reveal instead neural coding of boundary or surface shape. Here, we addressed this theoretical/experimental disconnect, using adaptive shape sampling to demonstrate explicit coding of medial axis shape in high-level object cortex (macaque monkey inferotemporal cortex or IT). Our metric shape analyses revealed a coding continuum, along which most neurons represent a configuration of both medial axis and surface components. Thus, IT response functions embody a rich basis set for simultaneously representing skeletal and external shape of complex objects. This would be especially useful for representing biological shapes, which are often characterized by both complex, articulated skeletal structure and specific surface features.
    Neuron 06/2012; 74(6):1099-113. DOI: 10.1016/j.neuron.2012.04.029
    ABSTRACT: This paper presents a fusion of the active appearance model (AAM) and the Riemannian elasticity framework which yields a non-linear shape model and a linear texture model: the active elastic appearance model (EAM). The non-linear elasticity shape model is more flexible than the usual linear subspace model, and it is therefore able to capture more complex shape variations. Local rotation and translation invariance are the primary explanation for the additional flexibility. In addition, we introduce global scale invariance into the Riemannian elasticity framework, which together with the local translation and rotation invariances eliminates the need for separate pose estimation. The new approach was tested against AAM in three experiments: face labeling, face labeling with poor initialization, and corpus callosum segmentation. In all the examples the EAM performed significantly better than AAM. Our Matlab implementation can be downloaded through svn from https://svn.imm.dtu.dk/AAMLab/svn/AAMLab/trunk/ .
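For context on the Riemannian elasticity regularizer invoked above, a commonly used isotropic form (following Pennec et al.'s Riemannian elasticity; the weights shown are indicative and the EAM paper's exact formulation may differ) penalizes the matrix logarithm of the Cauchy-Green strain tensor of the warp φ:

```latex
% Hedged sketch of an isotropic Riemannian elasticity energy; the Lame-like
% weights mu and lambda (and their constant factors) are indicative only.
E_{\mathrm{elast}}(\varphi) \;=\; \int_{\Omega}
    \frac{\mu}{4}\,\operatorname{tr}\!\big((\log \Sigma(x))^{2}\big)
  + \frac{\lambda}{8}\,\big(\operatorname{tr}\log \Sigma(x)\big)^{2}\,\mathrm{d}x,
\qquad \Sigma(x) = \nabla\varphi(x)^{\top}\,\nabla\varphi(x).
```

Wherever the deformation is locally rigid, Σ = I and log Σ = 0, so the energy assigns no cost to local rotations and translations; this is the invariance the abstract credits for the model's extra flexibility compared with a linear subspace shape model.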
    ABSTRACT: We propose a nonparametric, probabilistic model for the automatic segmentation of medical images, given a training set of images and corresponding label maps. The resulting inference algorithms rely on pairwise registrations between the test image and individual training images. The training labels are then transferred to the test image and fused to compute the final segmentation of the test subject. Such label fusion methods have been shown to yield accurate segmentation, since the use of multiple registrations captures greater inter-subject anatomical variability and improves robustness against occasional registration failures. To the best of our knowledge, this manuscript presents the first comprehensive probabilistic framework that rigorously motivates label fusion as a segmentation approach. The proposed framework allows us to compare different label fusion algorithms theoretically and practically. In particular, recent label fusion or multi-atlas segmentation algorithms are interpreted as special cases of our framework. We conduct two sets of experiments to validate the proposed methods. In the first set of experiments, we use 39 brain MRI scans, with manually segmented white matter, cerebral cortex, ventricles and subcortical structures, to compare different label fusion algorithms and the widely used FreeSurfer whole-brain segmentation tool. Our results indicate that the proposed framework yields more accurate segmentation than FreeSurfer and previous label fusion algorithms. In a second experiment, we use brain MRI scans of 282 subjects to demonstrate that the proposed segmentation tool is sufficiently sensitive to robustly detect hippocampal volume changes in a study of aging and Alzheimer's disease.
    IEEE Transactions on Medical Imaging 10/2010; 29(10):1714-29. DOI: 10.1109/TMI.2010.2050897
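The label transfer and fusion step described in this last abstract can be illustrated with a generic locally weighted voting scheme. The sketch below is a hedged simplification, not the authors' probabilistic inference algorithm: the Gaussian intensity-difference weight, the function and parameter names (fuse_labels, beta), and the toy data are assumptions for illustration, and all training images and label maps are assumed to be already warped into the test image's space by the pairwise registrations.

```python
import numpy as np

def fuse_labels(test_img, warped_imgs, warped_labels, num_labels, beta=10.0):
    """Per-voxel weighted voting: each propagated label map votes with a weight
    that decays with the local intensity difference between its (warped)
    training image and the test image."""
    votes = np.zeros((num_labels,) + test_img.shape)
    for img, lab in zip(warped_imgs, warped_labels):
        w = np.exp(-beta * (img - test_img) ** 2)   # local image-match weight
        for l in range(num_labels):
            votes[l] += w * (lab == l)              # accumulate weighted votes per label
    return votes.argmax(axis=0)                     # winning label at every voxel


# Toy example: three synthetic "atlases" with two labels on a 4x4 grid.
rng = np.random.default_rng(0)
test = rng.random((4, 4))
atlases = [test + 0.05 * rng.standard_normal((4, 4)) for _ in range(3)]
labels = [(a > 0.5).astype(int) for a in atlases]
print(fuse_labels(test, atlases, labels, num_labels=2))
```

In the paper's framework such weights arise from the probabilistic model itself (e.g., local image likelihoods) rather than being hand-picked, which is what allows different fusion algorithms to be compared as special cases.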