Stephen M. Pizer

University of North Carolina at Chapel Hill, Chapel Hill, NC, United States

Publications (270) · 232.35 Total Impact Points

  • ABSTRACT: This paper discusses a novel framework for analyzing rotational deformations of real 3D objects. Rotational deformations such as twisting or bending have been observed as the major variation in some medical applications in which the features of the deformed 3D objects are directional data. We propose modeling and estimating the global deformations in terms of generalized rotations of directions. The proposed method can be cast as a generalized small-circle fitting on the unit sphere. We also discuss the estimation of descriptors for more complex deformations composed of two simple deformations. The proposed method can be used with a number of different 3D object models. Two analyses of 3D object data are presented in detail: one using skeletal representations in medical image analysis and one from biomechanical gait analysis of the knee joint. Supplementary materials are available online.
    Journal of Computational and Graphical Statistics 08/2015. · 1.27 Impact Factor
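The generalized small-circle fitting that the paper casts its estimator as can be illustrated with a minimal least-squares sketch (a toy only, not the authors' estimator: the axis is found by coarse grid search, and the directions are synthetic):

```python
import numpy as np

def fit_small_circle(X, n_grid=60):
    """Toy least-squares small-circle fit on the unit sphere.

    X: (n, 3) unit direction vectors.
    Minimizes sum_i (angle(c, x_i) - r)^2 over axis c and radius r
    by a coarse grid search over c; for a fixed axis, the optimal
    radius is the mean angular distance to the data.
    """
    best_c, best_r, best_err = None, None, np.inf
    for th in np.linspace(0.0, np.pi, n_grid):
        for ph in np.linspace(0.0, 2 * np.pi, 2 * n_grid, endpoint=False):
            c = np.array([np.sin(th) * np.cos(ph),
                          np.sin(th) * np.sin(ph),
                          np.cos(th)])
            ang = np.arccos(np.clip(X @ c, -1.0, 1.0))
            r = ang.mean()
            err = np.sum((ang - r) ** 2)
            if err < best_err:
                best_c, best_r, best_err = c, r, err
    return best_c, best_r

# synthetic directions lying on a 30-degree small circle about the z-axis
rng = np.random.default_rng(0)
r_true = np.deg2rad(30.0)
phi = rng.uniform(0.0, 2 * np.pi, 200)
X = np.stack([np.sin(r_true) * np.cos(phi),
              np.sin(r_true) * np.sin(phi),
              np.full_like(phi, np.cos(r_true))], axis=1)
c_hat, r_hat = fit_small_circle(X)
```

On this noise-free example the grid search recovers the z-axis and the 30-degree radius; the paper's estimator is a more principled optimization on the sphere.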
  • ABSTRACT: In image-guided radiotherapy (IGRT) of disease sites subject to respiratory motion, soft-tissue deformations can affect localization accuracy. We describe the application of a 2D/3D deformable registration method to soft-tissue localization in the abdomen. The method, called Registration Efficiency and Accuracy through Learning a Metric on Shape (REALMS), is designed to support real-time IGRT. In a previously developed version of REALMS, the method interpolated 3D deformation parameters for any credible deformation in a deformation space using a single globally trained Riemannian metric for each parameter. We propose a refinement of the method in which the metric is trained over a particular region of the deformation space, such that interpolation accuracy within that region is improved. We report on the application of the proposed algorithm to IGRT in abdominal disease sites, which is more challenging than in the lung because of low intensity contrast and non-respiratory deformation. We introduce a rigid translation vector to compensate for non-respiratory deformation and design a special region of interest around fiducial markers implanted near the tumor to produce a more reliable registration. Tests on both synthetic and actual abdominal datasets show that the localized approach achieves more accurate 2D/3D deformable registration than the global approach.
    IEEE Transactions on Medical Imaging 04/2014;
  • ABSTRACT: Fusion between an endoscopic movie and a CT can aid in specifying the tumor target volume for radiotherapy. This requires a deformable pharyngeal surface registration between a 3D endoscope reconstruction and a CT segmentation. In this paper, we propose using local geometric features to derive a set of initial correspondences between two surfaces, from which an association graph can be constructed for registration by spectral graph matching. We also define a new similarity measure that provides a meaningful way of computing inter-surface affinities in the association graph. Our registration method can deal with large non-rigid anatomical deformation, as well as missing data and topology change. We tested the robustness of our method with synthetic deformations and show registration results on real data.
    01/2014; 17(Pt 1):259-66.
  • Source
    ABSTRACT: In computer vision and image analysis, image registration between 2D projections and a 3D image that achieves high accuracy and near real-time computation is challenging. In this paper, we propose a novel method that can rapidly detect an object's 3D rigid motion or deformation from a 2D projection image or a small set thereof. The method, called CLARET (Correction via Limited-Angle Residues in External Beam Therapy), consists of two stages: registration preceded by shape-space and regression learning. In the registration stage, linear operators are used to iteratively estimate the motion/deformation parameters based on the current intensity residue between the target projection(s) and the digitally reconstructed radiograph(s) (DRRs) of the estimated 3D image. The method determines the linear operators via a two-step learning process. First, it builds a low-order parametric model of the image region's motion/deformation shape space from its prior 3D images. Second, using learning-time samples produced from the 3D images, it formulates the relationships between the model parameters and the co-varying 2D projection intensity residues by multi-scale linear regressions. The calculated multi-scale regression matrices yield the coarse-to-fine linear operators used to estimate the model parameters from the 2D projection intensity residues during registration. The method's application to image-guided radiation therapy (IGRT) requires only a few seconds and yields good results in localizing a tumor under rigid motion in the head and neck and under respiratory deformation in the lung, using one treatment-time 2D imaging projection or a small set thereof.
    Computer Vision and Image Understanding 09/2013; 117(9):1095-1106. · 1.23 Impact Factor
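The learned linear operator at the heart of CLARET — a regression from projection intensity residues to motion parameters, applied iteratively — can be sketched in one dimension (a toy stand-in: a Gaussian blob replaces the DRR and a single translation replaces the shape-space parameters; the real method is image-based and multi-scale):

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(-10.0, 10.0, 200)
proj = lambda t: np.exp(-0.5 * (x - t) ** 2)   # toy "projection" of a blob at offset t

# Learning stage: sample the 1-D "shape space" and regress the parameter
# on the intensity residue relative to the reference projection.
t_train = rng.uniform(-2.0, 2.0, 50)
R = np.stack([proj(t) - proj(0.0) for t in t_train])   # training residues
M, *_ = np.linalg.lstsq(R, t_train, rcond=None)        # linear operator

# Registration stage: iteratively refine the estimate from the current
# residue between the target projection and the estimated projection.
t_target, t_hat = 1.3, 0.0
for _ in range(10):
    residue = proj(t_target) - proj(t_hat)
    t_hat += residue @ M
```

Because the regression is exact on the training samples and the residue varies smoothly with the parameter, the fixed-point iteration settles near the target offset in this toy setting.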
  • Source
    Chen-Rui Chou, Stephen Pizer
    MICCAI workshop on Medical Computer Vision; 09/2013
  • Source
    Chen-Rui Chou, Stephen Pizer
    ABSTRACT: We present a novel 2D/3D deformable registration method, called Registration Efficiency and Accuracy through Learning a Metric on Shape (REALMS), that can support real-time image-guided radiation therapy (IGRT). The method consists of two stages: planning-time learning and registration. In the planning-time learning, it first models the patient's 3D deformation space from the patient's time-varying 3D planning images using a low-dimensional parametrization. Second, it samples deformation parameters within the deformation space and generates corresponding simulated projection images from the deformed 3D image. Finally, it learns a Riemannian metric in the projection space for each deformation parameter. The learned distance metric forms a Gaussian kernel of a kernel regression that minimizes the leave-one-out regression residual of the corresponding deformation parameter. In the registration, REALMS interpolates the patient's 3D deformation parameters using the kernel regression with the learned distance metrics. Our test results showed that REALMS can localize the tumor in 10.89 ms (91.82 fps) with 2.56 ± 1.11 mm error using a single projection image. These promising results show REALMS's high potential to support real-time, accurate, and low-dose IGRT.
    MICCAI workshop on Medical Computer Vision; 10/2012
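The kernel-regression interpolation that REALMS performs at registration time can be sketched as follows (a toy: a 1-D blob stands in for the projection image, a single parameter for the deformation space, and an isotropic Gaussian kernel replaces the learned Riemannian metric):

```python
import numpy as np

x = np.linspace(-5.0, 5.0, 100)
project = lambda c: np.exp(-0.5 * (x - 2.0 * c) ** 2)   # projection as a function of the parameter

# Planning-time sampling of the deformation space.
c_train = np.linspace(-1.0, 1.0, 30)
P = np.stack([project(c) for c in c_train])

def kernel_regress(p_new, P, c_train, sigma=0.5):
    """Nadaraya-Watson estimate of the deformation parameter from a
    new projection, weighting training samples by projection distance."""
    d2 = np.sum((P - p_new) ** 2, axis=1)
    w = np.exp(-d2 / (2.0 * sigma ** 2))
    return np.sum(w * c_train) / np.sum(w)

c_hat = kernel_regress(project(0.37), P, c_train)
```

The estimate interpolates between the sampled parameters; in REALMS the per-parameter learned metric replaces the isotropic distance so that only deformation-relevant intensity differences contribute to the weights.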
  • ABSTRACT: Purpose: To study the feasibility of a novel 2D/3D image registration method, called Projection Metric Learning for Shape Kernel Regression (PML-SKR), in supporting on-board x-ray imaging systems to perform real-time image-guided radiation therapy in the lung. Methods: PML-SKR works in two stages: planning and treatment. At the planning stage, it first parameterizes the patient's respiratory deformation from the patient's treatment-planning respiratory-correlated CTs (RCCTs) by performing PCA on the inter-phase respiratory deformations. Second, it simulates a set of training projection images from a set of deformed CTs whose associated deformation parameters are sampled within 3 standard deviations of the parameter values observed in the RCCTs. Finally, it learns a Riemannian distance metric on projection intensity for each deformation parameter. The learned distance metric forms a Gaussian kernel of a kernel regression that minimizes the leave-one-out regression residual of the corresponding deformation parameter. At the treatment stage, PML-SKR interpolates the patient's 3D deformation parameters from the parameter values in the training cases using the kernel regression with the learned distance metrics. Results: We tested PML-SKR on the NST (Nanotube Stationary Tomosynthesis) x-ray imaging system. In each test case, a DRR (dimension: 64×64) of an x-ray source in the NST was simulated from a target CT for registration. The target CTs were deformed by normally distributed random samples of the first three deformation parameters. We generated 300 synthetic test cases from 3 lung datasets and measured registration quality by the mTRE (mean target registration error) over all cases and all voxels at tumor sites. With PML-SKR's registrations, the average mTRE and its standard deviation dropped from 10.89±4.44 mm to 0.67±0.46 mm using 125 training projection images. The computation time for each registration is 12.71±0.70 ms. Conclusion: The synthetic results show PML-SKR's promise in supporting real-time, accurate, and low-dose lung IGRT. This work was partially supported by Siemens Medical Solutions.
    Medical Physics 06/2012; 39(6):3875-3876. · 2.91 Impact Factor
  • ABSTRACT: Purpose: To evaluate the feasibility of patient-specific deformation models (PSDM) in the male pelvis for IGRT by limited-angle imaging. Methods: In IGRT via limited-angle imaging, insufficient angular projections are acquired to uniquely determine a 3D attenuation distribution. For highly limited geometries, image quality may be too poor for successful non-rigid registration. This can be overcome by restricting the transformation space to one containing only feasible transformations learned from prior 3D images. This approach has been applied successfully in the lung region, where the majority of deformation is due to respiratory motion, which can be adequately observed at planning time with RCCT. Typically, the phases of the RCCT are registered together to form a group-wise mean image and transformations to each training image. PCA is then performed on the transformation displacement vector fields. The transformation is found at treatment time by registering digitally reconstructed radiographs of the transformed image to the measured projections, optimizing over the parameters of the PCA subspace. In the male pelvis, deformation is much more complicated than respiratory deformation and is largely inter-fractional, due to changes in bladder and rectal contents, articulation, and motion of the bowels. A similar model is developed for the male pelvis that takes pelvic anatomical information into account and handles the more complicated deformation space. Results: Using the leave-one-out method, Dice similarity coefficients in the prostate compared with manual segmentations are increased over those obtained by rigid registration and are comparable with those obtained by 3D non-rigid registration methods. Conclusions: This method produces better results than rigid registration and is comparable with results obtained by 3D/3D registration even though it uses limited-angle projections. However, it relies on daily training CTs, so it is not yet a viable clinical method. Funding provided in part by Siemens Medical.
    Medical Physics 06/2012; 39(6):3667. · 2.91 Impact Factor
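The restricted transformation space described above can be sketched with PCA on toy 1-D displacement fields (synthetic modes and coefficients, not RCCT registrations):

```python
import numpy as np

rng = np.random.default_rng(3)
n_vox = 500
t = np.linspace(0.0, np.pi, n_vox)
modes = np.stack([np.sin(t), np.cos(t)])           # two "true" deformation modes
coeffs = rng.normal(size=(10, 2))                  # coefficients for 10 training phases
fields = coeffs @ modes + 0.01 * rng.normal(size=(10, n_vox))

# PCA of the displacement fields: the mean plus the top-k eigenmodes
# span the feasible-transformation subspace optimized over at treatment time.
mean = fields.mean(axis=0)
_, S, Vt = np.linalg.svd(fields - mean, full_matrices=False)
k = 2
project = lambda f: mean + ((f - mean) @ Vt[:k].T) @ Vt[:k]

explained = (S[:k] ** 2).sum() / (S ** 2).sum()    # variance captured by the subspace
recon_err = np.abs(project(fields) - fields).max() # residual after restriction
```

Optimizing only over the k subspace coefficients is what makes registration against highly limited-angle projections tractable: every candidate transformation is, by construction, a feasible one.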
  • Source
    Fourth International (MICCAI) Workshop on Pulmonary Image Analysis; 09/2011
  • Source
    ABSTRACT: In this paper, we introduce a new texture metamorphosis approach for interpolating texture samples from a source texture into a target texture. We use a new energy optimization scheme derived from optimal control principles that exploits the structure of the metamorphosis optimality conditions. Our approach considers the change in pixel position and pixel appearance in a single framework. In contrast to previous techniques that compute a global warping based on feature masks of textures, our approach transforms one texture into another by considering both the intensity values and the structural features of the textures simultaneously. We demonstrate the usefulness of our approach on different textures, such as stochastic, semi-structural, and regular textures, with different levels of complexity. Our method produces visually appealing transformation sequences with no user interaction.
    Computer Graphics Forum 01/2011; 30:2341-2353. · 1.64 Impact Factor
  • Source
    Proceedings of an International Symposium on the Occasion of the 25th Anniversary of McGill University Centre for Intelligent Machines; 11/2010
  • Source
    ABSTRACT: One goal of statistical shape analysis is discrimination between two populations of objects. Whereas traditional shape analysis was mostly concerned with single objects, analysis of multi-object complexes presents new challenges related to alignment and pose. In this paper, we present a methodology for discriminant analysis of multiple objects represented by sampled medial manifolds. Non-Euclidean metrics that describe geodesic distances between sets of sampled representations are used for alignment and discrimination. Our choice of discriminant method is the distance-weighted discriminant because of its generalization ability in high-dimension, low-sample-size settings. Using an unbiased, soft discrimination score, we associate a statistical hypothesis test with the discrimination results. We explore the effectiveness of different choices of features as input to the discriminant analysis, using measures such as volume, pose, shape, and the combination of pose and shape. Our method is applied to a longitudinal pediatric autism study of 10 subcortical brain structures in a population of 70 subjects. It is shown that the choices of type of global alignment and of intrinsic versus extrinsic shape features, the latter being sensitive to relative pose, are crucial factors for group discrimination and also for explaining the nature of shape change in terms of the application domain.
    IEEE Transactions on Pattern Analysis and Machine Intelligence 04/2010; 32(4):652-61.
  • Source
    Xiaoxiao Liu, Ipek Oguz, Stephen M Pizer, Gig S Mageras
    ABSTRACT: 4D image-guided radiation therapy (IGRT) for free-breathing lungs is challenging due to the complicated respiratory dynamics. Effective modeling of respiratory motion is crucial to account for the motion's effects on the dose to tumors. We propose a shape-correlated statistical model on dense image deformations for patient-specific respiratory motion estimation in 4D lung IGRT. Using the shape deformations of the high-contrast lungs as the surrogate, the statistical model trained from the planning CTs can be used to predict the image deformation at delivery verification time, under the assumption that the respiratory motion at both times is similar for the same patient. Dense image deformation fields obtained by diffeomorphic image registrations characterize the respiratory motion within one breathing cycle. A point-based particle optimization algorithm is used to obtain shape models of the lungs with group-wise surface correspondences. Canonical correlation analysis (CCA) is adopted in training to maximize the linear correlation between the shape variations of the lungs and the corresponding dense image deformations. Both intra- and inter-session CT studies are carried out on a small group of lung cancer patients and evaluated in terms of tumor localization accuracy. The results suggest potential applications of the proposed method.
    Proc SPIE 02/2010; 7625.
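The CCA step — maximizing linear correlation between lung-shape variation and dense image deformation — can be sketched with toy feature vectors sharing one latent respiratory signal (a minimal numpy version; the variable names and dimensions are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 100
latent = rng.normal(size=n)                                  # shared "respiratory" signal
shape = np.outer(latent, [1.0, -0.5]) + 0.1 * rng.normal(size=(n, 2))
deform = np.outer(latent, [0.3, 2.0, 1.0]) + 0.1 * rng.normal(size=(n, 3))

def first_canonical_corr(X, Y):
    """First canonical correlation: the top singular value of
    Qx^T Qy, where Qx, Qy are orthonormal bases of the centered blocks."""
    Xc, Yc = X - X.mean(0), Y - Y.mean(0)
    Qx, _, _ = np.linalg.svd(Xc, full_matrices=False)
    Qy, _, _ = np.linalg.svd(Yc, full_matrices=False)
    return np.linalg.svd(Qx.T @ Qy, compute_uv=False)[0]

rho = first_canonical_corr(shape, deform)
```

Because both blocks are driven by the same latent signal with small noise, the first canonical correlation comes out close to 1; the canonical directions then give the linear map from shape surrogate to image deformation.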
  • Source
    Sungkyu Jung, Xiaoxiao Liu, J. S. Marron, Stephen M. Pizer
    ABSTRACT: Principal component analysis (PCA) for various types of image data is analyzed in terms of the forward and backward stepwise viewpoints. In the traditional forward view, PCA and approximating subspaces are constructed from lower dimension to higher dimension. The backward approach builds PCA in the reverse order, from higher dimension to lower dimension. We see that for manifold data the backward view gives much more natural and accessible generalizations of PCA. As a backward stepwise approach, composite principal nested spheres, which generalizes PCA, is proposed. In an example describing the motion of the lung based on CT images, we show that composite principal nested spheres captures landmark data more succinctly than forward PCA methods.
    01/2010: pages 111-123;
  • Source
    Proceedings of the Eurographics Workshop on Visual Computing for Biomedicine, VCBM 2010, Leipzig, Germany; 01/2010
  • Source
    MICCAI'10 Pulmonary Image Analysis Workshop; 01/2010
  • Medical Physics 01/2010; 37(6). · 2.91 Impact Factor
  • Source
    Edward L Chaney, Stephen M Pizer
    Journal of the American College of Radiology: JACR 07/2009; 6(6):455-8.
  • Source
    ABSTRACT: Intensity-modulated radiation therapy (IMRT) for cancers in the lung remains challenging due to the complicated respiratory dynamics. We propose a shape-navigated dense image deformation model to estimate patient-specific breathing motion using 4D respiratory-correlated CT (RCCT) images. The idea is to use the shape change of the lungs, the major motion feature in the thorax image, as a surrogate to predict the corresponding dense image deformation from training. To build the statistical model, dense diffeomorphic deformations from the images at all other time points to the image at end expiration are calculated, and the shapes of the lungs are automatically extracted. By correlating the shape variation with the temporally corresponding image deformation variation, a linear mapping function that maps a shape change to its corresponding image deformation is computed from the training sample. Finally, given a shape extracted from the image at an arbitrary time point, its dense image deformation can be predicted from the pre-computed statistics. The method is carried out on two patients and evaluated in terms of tumor and lung estimation accuracies. The results show the robustness of the model and suggest its potential for 4D lung radiation treatment planning.
    Proceedings / IEEE International Symposium on Biomedical Imaging: from nano to macro. IEEE International Symposium on Biomedical Imaging 06/2009; 2009:875-878.
  • American Association of Physicists in Medicine Annual Meeting; 01/2009

Publication Stats

5k Citations
232.35 Total Impact Points


  • 1973–2013
    • University of North Carolina at Chapel Hill
      • Department of Computer Science
      • Department of Radiology
      • Department of Medicine
      • Department of Biomedical Engineering
      • Department of Radiation Oncology
      Chapel Hill, NC, United States
  • 2007
    • Boston University
      • Department of Mathematics and Statistics
      Boston, Massachusetts, United States
  • 2003
    • Memorial Sloan-Kettering Cancer Center
      • Department of Medical Physics
      New York City, NY, United States
  • 1998
    • Brigham Young University - Hawaii
      Kahuku, Hawaii, United States
  • 1993
    • Northeastern University
      Boston, Massachusetts, United States
  • 1990
    • University of Massachusetts Medical School
      Worcester, Massachusetts, United States
  • 1989–1990
    • University of North Carolina at Charlotte
      Charlotte, North Carolina, United States