Stephen M. Pizer

University of North Carolina at Chapel Hill, North Carolina, United States

Publications (285) · 304.52 Total Impact

  • Liyun Tu · Dan Yang · Jared Vicory · Xiaohong Zhang · Stephen M. Pizer · Martin Styner ·
    ABSTRACT: We present a scheme that propagates a reference skeletal model (s-rep) into a particular instance of an object, thereby propagating the initial shape-related layout of the skeleton-to-boundary vectors, called spokes. The scheme represents the surfaces of the template and the target objects by spherical harmonics and computes a warp between them via a thin-plate spline. To form the propagated s-rep, it applies the warp to the spokes of the template s-rep and then statistically refines them. This automatic approach promises to make s-rep fitting robust for complicated objects, making s-rep-based statistics broadly available. The improvement in fitting and in statistics is significant compared with previous methods, and the statistics also improve on a state-of-the-art boundary-based method.
    IEEE Signal Processing Letters 12/2015; 22(12):2269-2273. DOI:10.1109/LSP.2015.2476366 · 1.75 Impact Factor
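A minimal sketch of the warp-and-propagate step described in the abstract above: a thin-plate-spline map estimated between corresponding template and target boundary points is applied to the template's skeletal points and spoke tips. The function names, data shapes, and use of scipy's RBFInterpolator are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

def propagate_spokes(template_boundary, target_boundary,
                     skeletal_points, spoke_vectors):
    """template_boundary, target_boundary: (N, 3) corresponding surface samples.
    skeletal_points: (M, 3) spoke tails; spoke_vectors: (M, 3) tail-to-boundary vectors."""
    # Thin-plate-spline interpolant taking template space to target space.
    warp = RBFInterpolator(template_boundary, target_boundary,
                           kernel='thin_plate_spline')
    tails = warp(skeletal_points)                 # warped skeletal positions
    tips = warp(skeletal_points + spoke_vectors)  # warped spoke tips
    return tails, tips - tails                    # new tails and spoke vectors

# Toy usage with random data, just to show the call pattern.
rng = np.random.default_rng(0)
tb = rng.normal(size=(60, 3)); wb = tb + 0.1 * rng.normal(size=(60, 3))
sk = rng.normal(size=(20, 3)); sp = 0.2 * rng.normal(size=(20, 3))
new_tails, new_spokes = propagate_spokes(tb, wb, sk, sp)
```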
  • Jörn Schulz · Stephen M. Pizer · J. S. Marron · Fred Godtliebsen ·
    ABSTRACT: This paper presents a novel method to test mean differences of geometric object properties (GOPs). The method is designed for data whose representations include both Euclidean and non-Euclidean elements. It is based on advanced statistical analysis methods such as backward means on spheres. We develop a suitable permutation test to find global and simultaneously individual morphological differences between two populations based on the GOPs. To demonstrate the sensitivity of the method, an analysis exploring differences between hippocampi of first-episode schizophrenics and controls is presented. Each hippocampus is represented by a discrete skeletal representation (s-rep). We investigate important model properties using the statistics of populations. These properties are highlighted by the s-rep model that allows accurate capture of the object interior and boundary while, by design, being suitable for statistical analysis of populations of objects. By supporting non-Euclidean GOPs such as direction vectors, the proposed hypothesis test is novel in the study of morphological shape differences. Suitable difference measures are proposed for each GOP. Both global and simultaneous GOP analyses showed statistically significant differences between the first-episode schizophrenics and controls.
    Journal of Mathematical Imaging and Vision 12/2015; DOI:10.1007/s10851-015-0587-7 · 1.55 Impact Factor
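A hedged sketch of the permutation-test pattern the abstract describes: for each geometric object property (GOP), a mean-difference statistic that respects the property's geometry (Euclidean mean versus spherical mean of unit directions) is computed, and its null distribution is built by permuting group labels. The specific difference measures and data layout below are illustrative assumptions.

```python
import numpy as np

def spherical_mean(dirs):
    """Extrinsic mean of unit vectors, renormalized to the sphere."""
    m = dirs.mean(axis=0)
    return m / np.linalg.norm(m)

def gop_statistic(a, b, spherical=False):
    if spherical:
        # Geodesic angle between the two group means on the unit sphere.
        return np.arccos(np.clip(spherical_mean(a) @ spherical_mean(b), -1.0, 1.0))
    return np.linalg.norm(a.mean(axis=0) - b.mean(axis=0))

def permutation_pvalue(a, b, spherical=False, n_perm=5000, seed=0):
    rng = np.random.default_rng(seed)
    observed = gop_statistic(a, b, spherical)
    pooled, n_a = np.vstack([a, b]), len(a)
    count = 0
    for _ in range(n_perm):
        idx = rng.permutation(len(pooled))
        stat = gop_statistic(pooled[idx[:n_a]], pooled[idx[n_a:]], spherical)
        count += stat >= observed
    return (count + 1) / (n_perm + 1)
```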
  • ABSTRACT: This paper discusses a novel framework to analyze rotational deformations of real 3D objects. The rotational deformations such as twisting or bending have been observed as the major variation in some medical applications, where the features of the deformed 3D objects are directional data. We propose modeling and estimation of the global deformations in terms of generalized rotations of directions. The proposed method can be cast as a generalized small circle fitting on the unit sphere. We also discuss the estimation of descriptors for more complex deformations composed of two simple deformations. The proposed method can be used for a number of different 3D object models. Two analyses of 3D object data are presented in detail: one using skeletal representations in medical image analysis as well as one from biomechanical gait analysis of the knee joint. Supplementary Materials are available online.
    Journal of Computational and Graphical Statistics 08/2015; 24(2):539-560. DOI:10.1080/10618600.2014.914947 · 1.22 Impact Factor
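An illustrative sketch of the "small circle fitting" step at the core of the rotational-deformation paper above: find an axis c and a constant polar angle r so that the observed unit directions lie near the circle of points at angle r from c. The least-squares formulation, the spherical-coordinate parametrization of the axis, and the multi-start optimizer are assumptions made for illustration.

```python
import numpy as np
from scipy.optimize import minimize

def fit_small_circle(directions):
    """directions: (N, 3) unit vectors. Returns (axis, radius_angle)."""
    def axis_from(params):
        theta, phi = params
        return np.array([np.sin(theta) * np.cos(phi),
                         np.sin(theta) * np.sin(phi),
                         np.cos(theta)])
    def cost(params):
        c = axis_from(params)
        angles = np.arccos(np.clip(directions @ c, -1.0, 1.0))
        return np.var(angles)  # for a fixed axis, the best radius is the mean angle
    best = min((minimize(cost, x0, method='Nelder-Mead')
                for x0 in [(0.1, 0.1), (1.0, 1.0), (2.0, 4.0)]),
               key=lambda res: res.fun)
    c = axis_from(best.x)
    r = np.mean(np.arccos(np.clip(directions @ c, -1.0, 1.0)))
    return c, r
```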
  • ABSTRACT: This work improves the shape statistics of medical image objects by generating correspondence of interior skeletal points; it is evaluated on synthetic objects and on real-world lateral ventricles segmented from MR images. Each object's interior is modeled by a skeletal representation called the s-rep, a quadrilaterally sampled, folded 2-sided skeletal sheet with spoke vectors proceeding from the sheet to the boundary. The skeleton is divided into three parts: up-side, down-side and fold-curve. The spokes on each part are treated separately and, using spoke interpolation, are shifted along their skeletal parts in each training sample so as to tighten the probability distribution on those spokes' geometric properties while sampling the object interior regularly. As with the surface-based correspondence method of Cates et al., entropy is used to measure both the probability distribution tightness and the sampling regularity. The spokes' geometric properties are skeletal position, spoke length and spoke direction. The properties used to measure regularity are the volumetric subregions bounded by the spokes, their quadrilateral sub-areas and their edge lengths on the skeletal surface and on the boundary. Evaluation on synthetic and real-world lateral ventricles demonstrated improvement in the performance of statistics using the resulting probability distributions, as compared to methods based on boundary models. The evaluation measures used were generalization, specificity, and compactness. S-rep models with the proposed improved correspondence provide significantly enhanced statistics as compared to standard boundary models.
    Proceedings of SPIE - The International Society for Optical Engineering 06/2015; 9413. DOI:10.1117/12.2081245 · 0.20 Impact Factor
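A minimal sketch of the entropy trade-off that drives the correspondence optimization described above (in the spirit of Cates-style particle entropy): shifting spokes along the skeleton should tighten the population distribution of spoke features (low ensemble entropy) while keeping the sampling of each object regular (high per-object entropy). The feature layout and weighting below are assumptions, not the paper's exact objective.

```python
import numpy as np

def gaussian_entropy(features, eps=1e-6):
    """features: (n_samples, n_features). Entropy (up to constants) of a Gaussian
    fit, computed via the log-determinant of the regularized covariance."""
    x = features - features.mean(axis=0)
    cov = x.T @ x / max(len(x) - 1, 1)
    cov += eps * np.eye(cov.shape[0])
    _, logdet = np.linalg.slogdet(cov)
    return 0.5 * logdet

def correspondence_objective(pop_features, per_object_features, alpha=1.0):
    """pop_features: (n_cases, d) stacked spoke properties, one row per training case.
    per_object_features: list of (n_spokes, k) sampling-regularity features per case."""
    ensemble = gaussian_entropy(pop_features)                          # want small
    regularity = np.mean([gaussian_entropy(f) for f in per_object_features])
    return ensemble - alpha * regularity                               # minimize this
```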
  • Qingyu Zhao · Chen-Rui Chou · Gig Mageras · Stephen Pizer ·
    ABSTRACT: In image-guided radiotherapy (IGRT) of disease sites subject to respiratory motion, soft tissue deformations can affect localization accuracy. We describe the application of a method of 2D/3D deformable registration to soft tissue localization in the abdomen. The method, called Registration Efficiency and Accuracy through Learning a Metric on Shape (REALMS), is designed to support real-time IGRT. In a previously developed version of REALMS, the method interpolated 3D deformation parameters for any credible deformation in a deformation space using a single globally-trained Riemannian metric for each parameter. We propose a refinement of the method in which the metric is trained over a particular region of the deformation space, such that interpolation accuracy within that region is improved. We report on the application of the proposed algorithm to IGRT in abdominal disease sites, which is more challenging than in the lung because of low intensity contrast and non-respiratory deformation. We introduce a rigid translation vector to compensate for non-respiratory deformation, and design a special region of interest around fiducial markers implanted near the tumor to produce a more reliable registration. Both synthetic data and actual data tests on abdominal datasets show that the localized approach achieves more accurate 2D/3D deformable registration than the global approach.
    IEEE Transactions on Medical Imaging 04/2014; 33(8). DOI:10.1109/TMI.2014.2319193
  • Qingyu Zhao · Stephen Pizer · Marc Niethammer · Julian Rosenman ·
    ABSTRACT: Fusion between an endoscopic movie and a CT can aid in specifying the tumor target volume for radiotherapy. That requires a deformable pharyngeal surface registration between a 3D endoscope reconstruction and a CT segmentation. In this paper, we propose to use local geometric features for deriving a set of initial correspondences between the two surfaces, with which an association graph can be constructed for registration by spectral graph matching. We also define a new similarity measure for computing inter-surface affinities in the association graph. Our registration method can deal with large non-rigid anatomical deformation, as well as missing data and topology change. We tested the robustness of our method with synthetic deformations and showed registration results on real data.
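A minimal sketch of spectral matching on an association graph, the mechanism the abstract describes for selecting among candidate surface correspondences (in the spirit of Leordeanu-Hebert spectral matching): build a pairwise affinity matrix over candidate (endoscope-point, CT-point) assignments, take its leading eigenvector, and greedily pick mutually consistent assignments. The affinity definition and the selection rule here are illustrative assumptions.

```python
import numpy as np

def spectral_match(candidates, affinity, one_to_one=True):
    """candidates: list of (i, j) candidate correspondences.
    affinity: (M, M) symmetric, non-negative matrix of pairwise consistencies."""
    vals, vecs = np.linalg.eigh(affinity)
    x = np.abs(vecs[:, -1])                  # principal eigenvector of the affinity matrix
    order = np.argsort(-x)                   # consider strongest assignments first
    used_i, used_j, selected = set(), set(), []
    for k in order:
        if x[k] <= 0:
            break
        i, j = candidates[k]
        if one_to_one and (i in used_i or j in used_j):
            continue                         # enforce one-to-one matching
        selected.append((i, j))
        used_i.add(i); used_j.add(j)
    return selected
```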
  • Chen-Rui Chou · Stephen Pizer ·

    MICCAI workshop on Medical Computer Vision; 09/2013
  • Chen-Rui Chou · Brandon Frederick · Gig Mageras · Sha Chang · Stephen Pizer ·
    ABSTRACT: In computer vision and image analysis, image registration between 2D projections and a 3D image that achieves high accuracy and near real-time computation is challenging. In this paper, we propose a novel method that can rapidly detect an object's 3D rigid motion or deformation from a 2D projection image or a small set thereof. The method is called CLARET (Correction via Limited-Angle Residues in External Beam Therapy) and consists of two stages: registration preceded by shape space and regression learning. In the registration stage, linear operators are used to iteratively estimate the motion/deformation parameters based on the current intensity residue between the target projection(s) and the digitally reconstructed radiograph(s) (DRRs) of the estimated 3D image. The method determines the linear operators via a two-step learning process. First, it builds a low-order parametric model of the image region's motion/deformation shape space from its prior 3D images. Second, using learning-time samples produced from the 3D images, it formulates the relationships between the model parameters and the co-varying 2D projection intensity residues by multi-scale linear regressions. The calculated multi-scale regression matrices yield the coarse-to-fine linear operators used in estimating the model parameters from the 2D projection intensity residues in the registration. The method's application to Image-guided Radiation Therapy (IGRT) requires only a few seconds and yields good results in localizing a tumor under rigid motion in the head and neck and under respiratory deformation in the lung, using one treatment-time imaging 2D projection or a small set thereof.
    Computer Vision and Image Understanding 09/2013; 117(9):1095-1106. DOI:10.1016/j.cviu.2013.02.009 · 1.54 Impact Factor
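A hedged sketch of the regression-learning idea the CLARET abstract describes: deformation/motion parameters are sampled, simulated projections are rendered for each sample, and a linear operator mapping projection-intensity residues to parameter updates is fit by least squares; at registration time the operator is applied iteratively to the current residue. Rendering is abstracted into a user-supplied project() function, and all names, shapes, and the single-scale formulation are illustrative assumptions (the paper uses multi-scale regressions).

```python
import numpy as np

def learn_linear_operator(param_samples, project, reference_projection):
    """param_samples: (K, p) sampled motion/deformation parameters.
    project(params) -> flattened simulated projection for those parameters."""
    residues = np.stack([project(p) - reference_projection for p in param_samples])
    # Least-squares fit of params ~ A @ residue.
    A, *_ = np.linalg.lstsq(residues, param_samples, rcond=None)
    return A.T                                    # shape (p, n_pixels)

def register(measured_projection, project, A, n_iter=10):
    """Iteratively correct the parameter estimate from the projection residue."""
    params = np.zeros(A.shape[0])
    for _ in range(n_iter):
        residue = measured_projection - project(params)
        params = params + A @ residue             # learned linear correction step
    return params
```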
  • Chen-Rui Chou · Stephen Pizer ·
    ABSTRACT: We present a novel 2D/3D deformable registration method, called Registration Efficiency and Accuracy through Learning a Metric on Shape (REALMS), that can support real-time Image-Guided Radiation Therapy (IGRT). The method consists of two stages: planning-time learning and registration. In the planning-time learning, it first models the patient's 3D deformation space from the patient's time-varying 3D planning images using a low-dimensional parametrization. Second, it samples deformation parameters within the deformation space and generates corresponding simulated projection images from the deformed 3D image. Finally, it learns a Riemannian metric in the projection space for each deformation parameter. The learned distance metric forms a Gaussian kernel of a kernel regression that minimizes the leave-one-out regression residual of the corresponding deformation parameter. In the registration, REALMS interpolates the patient's 3D deformation parameters using the kernel regression with the learned distance metrics. Our test results showed that REALMS can localize the tumor in 10.89 ms (91.82 fps) with 2.56 ± 1.11 mm errors using a single projection image. These promising results show REALMS's high potential to support real-time, accurate, and low-dose IGRT.
    MICCAI workshop on Medical Computer Vision; 10/2012
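A hedged sketch of the REALMS-style kernel regression and leave-one-out criterion described above: each deformation parameter is interpolated from training (projection, parameter) pairs with Gaussian kernel weights, and candidate metrics are compared by their leave-one-out regression residual. A simple isotropic metric chosen by bandwidth search stands in for the learned Riemannian metric; all names and shapes are assumptions.

```python
import numpy as np

def kernel_regress(query_proj, train_projs, train_params, M):
    """Gaussian-kernel regression of one scalar parameter with metric M."""
    diff = query_proj - train_projs
    d2 = np.einsum('ij,jk,ik->i', diff, M, diff)      # squared Mahalanobis distances
    w = np.exp(-d2)
    return (w @ train_params) / (w.sum() + 1e-12)

def loo_residual(train_projs, train_params, M):
    """Leave-one-out residual used to compare candidate metrics M."""
    errs = []
    for i in range(len(train_projs)):
        keep = np.arange(len(train_projs)) != i
        pred = kernel_regress(train_projs[i], train_projs[keep], train_params[keep], M)
        errs.append((pred - train_params[i]) ** 2)
    return float(np.mean(errs))

def best_bandwidth(train_projs, train_params, sigmas=(0.5, 1.0, 2.0, 4.0)):
    """Pick the best isotropic bandwidth, a stand-in for full metric learning."""
    d = train_projs.shape[1]
    return min(sigmas, key=lambda s: loo_residual(train_projs, train_params,
                                                  np.eye(d) / (2 * s ** 2)))
```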
  • C Frederick · C Chou · S Pizer · D Lalush · S Chang ·
    ABSTRACT: Purpose: To evaluate the feasibility of patient-specific deformation models (PSDM) in the male pelvis for IGRT by limited-angle imaging. Methods: In IGRT via limited-angle imaging, insufficient angular projections are acquired to uniquely determine a 3D attenuation distribution. For highly limited geometries, image quality may be too poor for successful non-rigid registration. This can be overcome by restricting the transformation space to one containing only feasible transformations learned from prior 3D images. This has been successfully applied in the lung region, where the majority of deformation is due to respiratory motion, which can be adequately observed at planning time with RCCT. Typically, the phases of the RCCT are registered together to form a group-wise mean image and transformations to each training image. PCA is then performed on the transformation displacement vector fields. The transformation is found at treatment time by registration of digitally reconstructed radiographs of the transformed image to the measured projections, optimizing over the parameters of the PCA subspace. In the male pelvis, deformation is much more complicated than respiratory deformation and is largely inter-fractional, due to changes in bladder and rectal contents, articulation, and motion of the bowels. A similar model is developed for the male pelvis which takes into account pelvic anatomical information and handles the more complicated deformation space. Results: Using the leave-one-out method, Dice similarity coefficients in the prostate compared with manual segmentations are increased over those obtained by rigid registration and are comparable with those obtained by 3D non-rigid registration methods. Conclusions: This method produces better results than rigid registration and is comparable with results obtained by 3D/3D registration even though it uses limited-angle projections. However, it relies on daily training CTs, so it is not yet a viable clinical method. Funding provided in part by Siemens Medical.
    Medical Physics 06/2012; 39(6):3667. DOI:10.1118/1.4734899 · 2.64 Impact Factor
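A hedged sketch of the patient-specific deformation model described above: displacement vector fields (DVFs) from registering each training CT to a mean image are flattened and decomposed by PCA, giving a low-dimensional space of feasible transformations over which treatment-time optimization can search. sklearn's PCA is used for the decomposition; the data layout and component count are illustrative assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA

def build_deformation_subspace(dvfs, n_components=3):
    """dvfs: (n_training_cases, nx*ny*nz*3) flattened displacement fields."""
    return PCA(n_components=n_components).fit(dvfs)

def dvf_from_scores(pca, scores):
    """Reconstruct a displacement field from low-dimensional PCA scores."""
    return pca.inverse_transform(np.asarray(scores).reshape(1, -1)).ravel()

# Toy usage: 10 training fields on a tiny 4x4x4 volume.
rng = np.random.default_rng(0)
train = rng.normal(size=(10, 4 * 4 * 4 * 3))
pca = build_deformation_subspace(train)
candidate_dvf = dvf_from_scores(pca, [0.5, -0.2, 0.1])
```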
  • C Chou · C Frederick · S Chang · S Pizer ·
    ABSTRACT: Purpose: To study the feasibility of a novel 2D/3D image registration method, called Projection Metric Learning for Shape Kernel Regression (PML-SKR), in supporting on-board x-ray imaging systems to perform real-time image-guided radiation therapy in the lung. Methods: PML-SKR works in two stages: planning and treatment. At the planning stage, it first parameterizes the patient's respiratory deformation from the patient's treatment-planning Respiratory-Correlated CTs (RCCTs) by PCA on the inter-phase respiratory deformations. Second, it simulates a set of training projection images from a set of deformed CTs whose deformation parameters are sampled within 3 standard deviations of the parameter values observed in the RCCTs. Finally, it learns a Riemannian distance metric on projection intensity for each deformation parameter. The learned distance metric forms a Gaussian kernel of a kernel regression that minimizes the leave-one-out regression residual of the corresponding deformation parameter. At the treatment stage, PML-SKR interpolates the patient's 3D deformation parameters from the parameter values in the training cases using the kernel regression with the learned distance metrics. Results: We tested PML-SKR on the NST (Nanotube Stationary Tomosynthesis) x-ray imaging system. In each test case, a DRR (dimension: 64×64) of an x-ray source in the NST was simulated from a target CT for registration. The target CTs were deformed by normally distributed random samples of the first three deformation parameters. We generated 300 synthetic test cases from 3 lung datasets and measured the registration quality by the mTRE (mean Target Registration Error) over all cases and all voxels at tumor sites. With PML-SKR's registrations, the average mTRE and its standard deviation are reduced from 10.89±4.44 to 0.67±0.46 mm using 125 training projection images. The computation time for each registration is 12.71±0.70 ms. Conclusion: The synthetic results have shown PML-SKR's promise in supporting real-time, accurate, and low-dose lung IGRT. This work was partially supported by Siemens Medical Solutions.
    Medical Physics 06/2012; 39(6):3875-3876. DOI:10.1118/1.4735824 · 2.64 Impact Factor
  • Ilknur Kabul · Stephen M. Pizer · Julian Rosenman · Marc Niethammer ·
    ABSTRACT: In this paper, we introduce a new texture metamorphosis approach for interpolating texture samples from a source texture into a target texture. We use a new energy optimization scheme derived from optimal control principles which exploits the structure of the metamorphosis optimality conditions. Our approach considers the change in pixel position and pixel appearance in a single framework. In contrast to previous techniques that compute a global warping based on feature masks of textures, our approach allows one texture to be transformed into another by considering both intensity values and structural features of textures simultaneously. We demonstrate the usefulness of our approach for different textures, such as stochastic, semi-structural and regular textures, with different levels of complexity. Our method produces visually appealing transformation sequences with no user interaction.
    Computer Graphics Forum 12/2011; 30(8):2341-2353. DOI:10.1111/j.1467-8659.2011.02067.x · 1.64 Impact Factor
  • Fourth International (MICCAI) Workshop on Pulmonary Image Analysis; 09/2011
  • C Chou · B Frederick · S Chang · S Pizer ·
    ABSTRACT: Purpose: To study the feasibility of a novel method, CLARET (correction in limited-angle residues in external beam radiation), for supporting the Nanotube Stationary Tomosynthesis (NST) device to perform real-time deformable registration in lung IGRT. Methods: We designed a method called CLARET that uses a machine learning strategy. CLARET is a two-step process: training and IGRT. In the training stage it performs a patient-specific training that generates sample images from a range of potential treatment deformations. Potential treatment deformations are generated from the principal variations of deformation, which are calculated between the respiratory-correlated CTs (RCCTs) and their Fréchet mean image by diffeomorphic registration. For each such sample image it generates 2D projections by re-projecting on the image volume. It computes multi-scale linear regressions between the deformation parameters and the differences between the projections of the deformed CTs and those of the Fréchet mean CT. In the IGRT stage, the learned regressions are applied iteratively to the successive residues between the measured radiographs and the projections of the current estimated CT deformed by the previously predicted parameters. This iteration yields an accurate deformation field for treatment-time 3D image generation. Results: We tested CLARET using four patients' lung RCCTs with two types of NST geometries: 1) the single-source projection geometry and 2) the multiple-source projection geometry. For each patient and geometry a total of 40 simulated treatment-time NST projections were generated by reprojecting on the target CTs deformed from the patient's Fréchet mean RCCT. The mean and standard deviation of the tumor CoG location difference before registration are 0.6792±0.3267 cm. After CLARET registration, results in all geometries yielded sub-millimeter accuracy. The average computation time is 5 seconds. Conclusions: We have demonstrated the potential of our CLARET method in supporting the NST device to provide real-time lung IGRT with few limited-angle projection images. This research is partially sponsored by Siemens Medical Solutions.
    Medical Physics 01/2011; 38(6):3818. DOI:10.1118/1.3613380 · 2.64 Impact Factor
  • Chen-Rui Chou · C. Brandon Frederick · Sha X. Chang · Stephen M. Pizer ·
    ABSTRACT: This paper presents a novel patient repositioning method from limited-angle tomographic projections. It uses a machine learning strategy. Given a single planning CT image (3D) of a patient, one applies patient-specific training. Using the training results, the planning CT image, and the raw image projections collected at treatment time, our method yields the difference between the patient's treatment-time position and orientation and the planning-time position and orientation. In the training, one simulates credible treatment-time movements for the patient and by regression formulates a multiscale model that expresses the patient's movements as a function of the corresponding changes in the tomographic projections. When the patient's real-time projection images are acquired at treatment time, applying the calculated model to their differences from corresponding projections of the planning-time CT allows the patient's movements to be estimated. Using that estimate, the treatment-time 3D image can be estimated by transforming the planning CT image with the estimated movements, and from this, changes in the tomographic projections between those computed from the transformed CT and the real-time projection images can be calculated. The iterative, multiscale application of these steps converges to the repositioning movements. By this means, this method can overcome the deficiencies in limited-angle tomosynthesis and thus assist the clinician performing an image-guided treatment. We demonstrate the method's success in capturing patients' rigid motions with subvoxel accuracy using noise-added projection images of head and neck CTs.
    Proceedings of an International Symposium on the Occasion of the 25th Anniversary of McGill University Centre for Intelligent Machines; 11/2010
  • Paul Yushkevich · Daniel Fritsch · Stephen Pizer · Edward Chaney ·
    ABSTRACT: The accurate and quantitative determination of three-dimensional patient setup errors in conformal radiotherapy, including setup errors due to out-of-plane rotations, requires methods for registering pre-treatment, three-dimensional planning CT images with intra-treatment, two-dimensional portal images. We have developed a method for performing such a registration based on structural models that emphasize medial aspects of shape. Such models (1) provide an ability to pre-select those structures in a reference image which are known to be reliable fiducials for registration, (2) allow for the stable recognition of the same structures in treatment portal images, and (3) can be combined with images in such a way as to yield a measure of agreement between the model and the features in the image. We describe the means for creating a model in a reference image generated from the planning CT, for deforming the model to identify corresponding structures in a treatment portal image, and for optimizing an objective function based on combining the deformed model with a collection of digitally reconstructed radiographs generated from CT at tentative poses. The optimum of the objective function yields the three-dimensional pose of the patient relative to the planning pose, thereby indicating the three-dimensional setup error. Pilot results using simulated images with known patient positioning errors have shown that such an objective function obtains an optimum very near to truth.
  • C. Chou · S. Pizer · B. Frederick · S. Chang ·

    Medical Physics 06/2010; 37(6). DOI:10.1118/1.3469030 · 2.64 Impact Factor
  • ABSTRACT: One goal of statistical shape analysis is the discrimination between two populations of objects. Whereas traditional shape analysis was mostly concerned with single objects, analysis of multi-object complexes presents new challenges related to alignment and pose. In this paper, we present a methodology for discriminant analysis of multiple objects represented by sampled medial manifolds. Non-Euclidean metrics that describe geodesic distances between sets of sampled representations are used for alignment and discrimination. Our choice of discriminant method is the distance-weighted discriminant because of its generalization ability in high-dimensional, low sample size settings. Using an unbiased, soft discrimination score, we associate a statistical hypothesis test with the discrimination results. We explore the effectiveness of different choices of features as input to the discriminant analysis, using measures like volume, pose, shape, and the combination of pose and shape. Our method is applied to a longitudinal pediatric autism study with 10 subcortical brain structures in a population of 70 subjects. It is shown that the choices of type of global alignment and of intrinsic versus extrinsic shape features, the latter being sensitive to relative pose, are crucial factors for group discrimination and also for explaining the nature of shape change in terms of the application domain.
    IEEE Transactions on Pattern Analysis and Machine Intelligence 04/2010; 32(4):652-61. DOI:10.1109/TPAMI.2009.92 · 5.78 Impact Factor
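A hedged sketch of the discrimination-with-hypothesis-test pattern described above: project each multi-object feature vector onto a learned linear discriminant direction, record a soft discrimination score, and attach a permutation p-value by re-fitting under shuffled group labels. A linear SVM is used here purely as a stand-in for the distance-weighted discriminant (DWD), which is not available in scikit-learn, and all names and choices are illustrative assumptions rather than the paper's procedure.

```python
import numpy as np
from sklearn.svm import LinearSVC

def soft_score(X, y, C=1.0):
    """Mean separation of the two classes along the learned discriminant direction."""
    clf = LinearSVC(C=C, max_iter=10000).fit(X, y)
    proj = X @ clf.coef_.ravel()
    return proj[y == 1].mean() - proj[y == 0].mean()

def discrimination_pvalue(X, y, n_perm=1000, seed=0):
    """Permutation test on the soft discrimination score."""
    rng = np.random.default_rng(seed)
    observed = soft_score(X, y)
    perm_scores = np.array([soft_score(X, rng.permutation(y)) for _ in range(n_perm)])
    return (np.sum(perm_scores >= observed) + 1) / (n_perm + 1)
```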
  • Xiaoxiao Liu · Ipek Oguz · Stephen M Pizer · Gig S Mageras ·
    ABSTRACT: 4D image-guided radiation therapy (IGRT) for free-breathing lungs is challenging due to the complicated respiratory dynamics. Effective modeling of respiratory motion is crucial to account for the motion effects on the dose to tumors. We propose a shape-correlated statistical model on dense image deformations for patient-specific respiratory motion estimation in 4D lung IGRT. Using the shape deformations of the high-contrast lungs as the surrogate, the statistical model trained from the planning CTs can be used to predict the image deformation during delivery verification time, with the assumption that the respiratory motion at both times is similar for the same patient. Dense image deformation fields obtained by diffeomorphic image registrations characterize the respiratory motion within one breathing cycle. A point-based particle optimization algorithm is used to obtain the shape models of lungs with group-wise surface correspondences. Canonical correlation analysis (CCA) is adopted in training to maximize the linear correlation between the shape variations of the lungs and the corresponding dense image deformations. Both intra- and inter-session CT studies are carried out on a small group of lung cancer patients and evaluated in terms of tumor localization accuracy. The results suggest the potential of the proposed method.
    Proc SPIE 02/2010; 7625. DOI:10.1117/12.843974
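An illustrative sketch of the shape-correlated prediction idea described above: CCA relates lung-surface shape scores to dense deformation-field scores learned from the planning CTs, and at verification time the observed lung shape predicts the dense deformation. sklearn's CCA plus a simple least-squares read-out stand in for the paper's training procedure; the data shapes and names are assumptions.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

# Toy training data: n breathing phases, shape features and flattened DVFs.
rng = np.random.default_rng(1)
n, d_shape, d_dvf = 10, 6, 300
shapes = rng.normal(size=(n, d_shape))
dvfs = shapes @ rng.normal(size=(d_shape, d_dvf)) + 0.01 * rng.normal(size=(n, d_dvf))

cca = CCA(n_components=3).fit(shapes, dvfs)
U, V = cca.transform(shapes, dvfs)                 # correlated score pairs
# Linear read-out from shape scores back to DVF space (least squares on training data).
W, *_ = np.linalg.lstsq(U, dvfs - dvfs.mean(axis=0), rcond=None)

def predict_dvf(new_shape):
    """Predict a dense deformation field from a new lung-shape feature vector."""
    u = cca.transform(new_shape.reshape(1, -1))
    return u @ W + dvfs.mean(axis=0)

pred = predict_dvf(shapes[0])
```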
  • Sungkyu Jung · Xiaoxiao Liu · J. S. Marron · Stephen M. Pizer ·
    ABSTRACT: Principal component analysis (PCA) for various types of image data is analyzed in terms of the forward and backward stepwise viewpoints. In the traditional forward view, PCA and approximating subspaces are constructed from lower dimension to higher dimension. The backward approach builds PCA in the reverse order from higher dimension to lower dimension. We see that for manifold data the backward view gives much more natural and accessible generalizations of PCA. As a backward stepwise approach, composite Principal Nested Spheres, which generalizes PCA, is proposed. In an example describing the motion of the lung based on CT images, we show that composite Principal Nested Spheres captures landmark data more succinctly than forward PCA methods.
    01/2010: pages 111-123;
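A hedged sketch of one backward step of principal nested spheres (PNS) in the spirit of the chapter above: given directions on S^2 and a best-fitting small circle (axis c, radius angle r, e.g. from a routine like the fit_small_circle sketch earlier), each point is split into a signed geodesic residual and a projection onto the circle, which a further backward step would reduce toward a point (the backward mean). The decomposition below is an illustrative assumption, not the authors' implementation.

```python
import numpy as np

def pns_step(points, c, r):
    """points: (N, 3) unit vectors; c: (3,) unit axis; r: radius angle of the circle.
    Returns signed residuals and the points projected onto the small circle.
    Points exactly at +/-c have no defined azimuth and are not handled here."""
    angles = np.arccos(np.clip(points @ c, -1.0, 1.0))
    residuals = angles - r                        # signed geodesic residuals to the circle
    # Project each point onto the circle of polar angle r about axis c:
    # keep the azimuthal component, reset the polar angle to r.
    tangent = points - np.outer(points @ c, c)    # component orthogonal to the axis
    tangent /= np.linalg.norm(tangent, axis=1, keepdims=True)
    projected = np.cos(r) * c + np.sin(r) * tangent
    return residuals, projected
```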

Publication Stats

7k Citations
304.52 Total Impact Points


  • 1981-2015
    • University of North Carolina at Chapel Hill
      • • Department of Computer Science
      • • Department of Radiology
      • • Department of Biomedical Engineering
      North Carolina, United States
  • 1993
    • Northeastern University
      Boston, Massachusetts, United States
  • 1990
    • University of Massachusetts Medical School
      Worcester, Massachusetts, United States
  • 1989-1990
    • University of North Carolina at Charlotte
      Charlotte, North Carolina, United States