Article

Joint Prior Models of Neighboring Objects for 3D Image Segmentation.

Departments of Electrical Engineering and Diagnostic Radiology, Yale University, P.O. Box 208042, New Haven, CT 06520-8042, USA.
Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), 06/2004; 1:I314-I319. DOI: 10.1109/CVPR.2004.1315048
Source: PubMed

ABSTRACT This paper presents a novel method for 3D image segmentation that employs a Bayesian formulation based on joint prior knowledge of multiple objects, along with information derived from the input image. Our method is motivated by the observation that neighboring structures have consistent locations and shapes, providing configurations and context that aid segmentation. In contrast to the work presented earlier in [1], we define a Maximum A Posteriori (MAP) estimation model that uses the joint prior information of the multiple objects, which allows objects with clearer boundaries to serve as reference objects constraining the segmentation of difficult objects. To achieve this, multiple signed distance functions are employed as representations of the objects in the image. We introduce a representation for the joint density function of the neighboring objects and define a joint probability distribution over the variations of the objects contained in a set of training images. By estimating the MAP shapes of the objects, we formulate the joint shape prior models in terms of level set functions. We found the algorithm to be robust to noise and able to handle multidimensional data; furthermore, it avoids the need for point correspondences during the training phase. Results and validation from various experiments on 2D/3D medical images are demonstrated.
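
The abstract names the ingredients (signed distance representations, a joint density over neighboring objects, a MAP shape term) without giving the estimation details. The sketch below, assuming a simple Gaussian (PCA) model over concatenated signed distance functions, illustrates how such a joint prior could be built; the function names, the five-mode default, and the Gaussian assumption are illustrative, not taken from the paper.

    import numpy as np
    from scipy.ndimage import distance_transform_edt

    def signed_distance(mask):
        # Signed distance function: negative inside the object, positive outside.
        mask = mask.astype(bool)
        return distance_transform_edt(~mask) - distance_transform_edt(mask)

    def fit_joint_prior(training_masks, n_modes=5):
        # training_masks: list of tuples of binary masks (one tuple per training
        # image, one mask per neighboring object), aligned to a common grid.
        samples = [np.concatenate([signed_distance(m).ravel() for m in masks])
                   for masks in training_masks]
        X = np.stack(samples)              # (n_images, n_objects * n_voxels)
        mean = X.mean(axis=0)
        # PCA via SVD of the centered data: the leading modes capture *joint*
        # variation, i.e. correlations between neighboring objects.
        _, S, Vt = np.linalg.svd(X - mean, full_matrices=False)
        k = min(n_modes, len(S))
        var = (S[:k] ** 2) / max(len(samples) - 1, 1)
        return mean, Vt[:k], var

    def neg_log_joint_prior(sdfs, mean, modes, var):
        # -log P(shapes) up to a constant: Mahalanobis distance of the stacked
        # SDF vector from the training distribution in the joint mode space.
        phi = np.concatenate([s.ravel() for s in sdfs])
        alpha = modes @ (phi - mean)
        return 0.5 * np.sum(alpha ** 2 / var)

Concatenating the SDFs of all objects before the decomposition is what makes the prior joint: the modes encode correlated variation of neighbors, so a well-segmented reference object pulls a harder neighbor toward plausible configurations.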

Related publications

    ABSTRACT: Standard image based segmentation approaches perform poorly when there is little or no contrast along boundaries of different regions. In such cases, segmentation is largely performed manually using prior knowledge of the shape and relative location of the underlying structures combined with partially dis- cernible boundaries. We present an automated approach guided by covariant shape deformations of neighboring structures, which is an additional source of prior in- formation. Captured by a shape atlas, these deformations are transformed into a statistical model using the logistic function. Structure boundaries, anatomi- cal labels, and image inhomogeneities are estimated simultaneously within an Expectation-Maximization formulation of the maximum a posteriori probability estimation problem. We demonstrate the approach on 20 brain magnetic reso- nance images showing superior performance, particularly in cases where purely image based methods fail.
    Computer Vision for Biomedical Image Applications, First International Workshop, CVBIA 2005, Beijing, China, October 21, 2005, Proceedings; 01/2005
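
A minimal sketch of the logistic mapping this abstract mentions, assuming it converts an atlas-derived signed distance (or deformation) map into a per-voxel spatial prior; the slope/offset parameters and the E-step combination shown are assumptions for illustration, not the authors' exact formulation.

    import numpy as np

    def logistic_spatial_prior(atlas_distance, slope=1.0, offset=0.0):
        # Map an atlas-derived signed distance map to a per-voxel probability
        # of the structure via the logistic function; slope and offset would
        # be fit from the training deformations.
        return 1.0 / (1.0 + np.exp(slope * atlas_distance + offset))

    def e_step_posteriors(likelihoods, priors):
        # One E-step-style update: combine per-label image likelihoods with
        # the logistic spatial priors, normalizing over labels at each voxel.
        joint = likelihoods * priors       # shape: (n_labels, *volume_shape)
        return joint / joint.sum(axis=0, keepdims=True)
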
ABSTRACT: Precision is required in medical treatments; however, most three-dimensional (3-D) renderings of medical images lack the required precision. This study aimed to develop a precise 3-D image processing method that discriminates edges clearly. Since conventional Computed Tomography (CT), Positron Emission Tomography (PET), and Magnetic Resonance Imaging (MRI) medical images are all slice-based stacked 3-D images, one effective way to obtain a precise 3-D rendering is to process the slice data with high precision first and then to stack the slices carefully to reconstruct the desired 3-D image. A recent two-dimensional (2-D) image processing method, an entropy maximization procedure that combines the gradient and region segmentation approaches to achieve a much better result than either alone, seemed the best choice to extend into 3-D processing. Three examples of CT scan data were used to test the validity of our method. We found that our 3-D renderings not only achieved the precision we sought but also exhibited many interesting characteristics that should significantly influence medical practice.
Knowledge-Based Intelligent Information and Engineering Systems, 9th International Conference, KES 2005, Melbourne, Australia, September 14-16, 2005, Proceedings, Part III; 01/2005
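
The entropy maximization procedure is not specified in detail here; one classic instance is Kapur's entropy thresholding, sketched below together with the slice-then-stack strategy the abstract describes. Kapur's criterion stands in for the region half of the method only (the actual procedure also folds in gradient information), so treat this as an assumption-laden illustration.

    import numpy as np

    def entropy_threshold(img, nbins=256):
        # Kapur's entropy-maximization thresholding: choose the threshold
        # that maximizes the sum of foreground and background entropies.
        hist, edges = np.histogram(img, bins=nbins)
        p = hist / hist.sum()
        best_t, best_h = 1, -np.inf
        for t in range(1, nbins):
            p0, p1 = p[:t].sum(), p[t:].sum()
            if p0 == 0 or p1 == 0:
                continue
            q0 = p[:t][p[:t] > 0] / p0
            q1 = p[t:][p[t:] > 0] / p1
            h = -np.sum(q0 * np.log(q0)) - np.sum(q1 * np.log(q1))
            if h > best_h:
                best_t, best_h = t, h
        return edges[best_t]

    def segment_and_stack(slices):
        # Slice-then-stack strategy: segment each 2-D slice with high
        # precision, then stack the binary results into a 3-D volume.
        return np.stack([s > entropy_threshold(s) for s in slices], axis=0)
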
ABSTRACT: Although redundancy reduction is the key to visual coding in the mammalian visual system [1,2], at a higher level the visual understanding step, a central component of intelligence, achieves high robustness by exploiting redundancies in images in order to resolve uncertainty, ambiguity, or contradiction [3,4]. In this paper, an algorithmic framework, Learning Ensembles of Anatomical Patterns (LEAP), is presented for the automatic localization and parsing of human anatomy from medical images. It achieves high robustness by exploiting statistical redundancies at three levels: the anatomical level, the parts-whole level, and the voxel level in scale space. The recognition-by-parts intuition is formulated in a more principled way as a spatial ensemble, with added redundancy and less parameter tuning for medical imaging applications. Different use cases were tested on 2D and 3D medical images, including X-ray, CT, and MRI, for purposes such as view identification, organ and body-part localization, and MR imaging plane detection. LEAP is shown to significantly outperform existing methods and its "non-redundant" counterparts.
Proceedings of the 11th ACM SIGMM International Conference on Multimedia Information Retrieval, MIR 2010, Philadelphia, Pennsylvania, USA, March 29-31, 2010; 01/2010
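
A hedged sketch of the recognition-by-parts idea as a spatial ensemble: each detected part votes for the target location through a learned offset, and a robust aggregate tolerates misdetections. The voting-by-median scheme and the function below are illustrative assumptions, not LEAP's actual algorithm.

    import numpy as np

    def ensemble_localize(part_positions, part_offsets):
        # Every detected part casts a vote for the target location through
        # its learned mean offset; the coordinate-wise median of the votes is
        # robust to a minority of misdetections -- the parts-whole redundancy
        # the abstract describes.
        votes = [np.asarray(pos) + np.asarray(off)
                 for pos, off in zip(part_positions, part_offsets)
                 if pos is not None]      # parts that failed to detect abstain
        return np.median(np.stack(votes), axis=0)

For example, ensemble_localize([(12, 40), (30, 8), None], [(0, -10), (-18, 22), (5, 5)]) returns the median of the two surviving votes, (12.0, 30.0).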
