ABSTRACT: Ordered Subset Expectation Maximization (OSEM) is currently the most widely used image reconstruction algorithm for clinical PET. However, OSEM does not necessarily provide optimal image quality, and a number of alternative algorithms have been explored. We have recently shown that a penalized likelihood image reconstruction algorithm using the relative difference penalty, block sequential regularized expectation maximization (BSREM), achieves more accurate lesion quantitation than OSEM and, importantly, maintains acceptable visual image quality in clinical whole-body PET. The goal of this work was to evaluate lesion detectability with BSREM versus OSEM. We performed a two-alternative forced choice study using 81 patient datasets with lesions of varying contrast inserted into the liver and lung. At matched imaging noise, BSREM and OSEM showed equivalent detectability in the lungs, and BSREM outperformed OSEM in the liver. These results suggest that BSREM provides not only improved quantitation and clinically acceptable visual image quality, as previously shown, but also improved lesion detectability compared to OSEM. We then modeled this detectability study, applying both non-prewhitening (NPW) and channelized Hotelling (CHO) model observers to the reconstructed images. The CHO model observer showed good agreement with the human observers, suggesting that we can apply this model to future studies with varying simulation and reconstruction parameters.
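The channelized Hotelling observer used in the study above forms a Hotelling template in a low-dimensional channel space. A minimal numpy sketch with synthetic images and difference-of-Gaussian channels (all sizes, channel widths, and signal parameters here are assumptions for illustration, not the study's settings):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: 64x64 images, 4 radially symmetric difference-of-Gaussian
# channels (a common CHO channel choice; widths are illustrative).
n = 64
yy, xx = np.mgrid[:n, :n] - n // 2
r = np.hypot(xx, yy)

def dog_channel(sigma):
    # Difference-of-Gaussians radial channel profile.
    return np.exp(-r**2 / (2 * sigma**2)) - np.exp(-r**2 / (2 * (1.66 * sigma)**2))

U = np.stack([dog_channel(s).ravel() for s in (2.0, 4.0, 8.0, 16.0)], axis=1)

# Synthetic signal-absent / signal-present image ensembles
# (Gaussian lesion in white noise; paired noise for simplicity).
signal = 5.0 * np.exp(-r**2 / (2 * 3.0**2)).ravel()
absent = rng.normal(0.0, 1.0, size=(200, n * n))
present = absent + signal

v0 = absent @ U                 # channel outputs, signal absent
v1 = present @ U                # channel outputs, signal present
dv = v1.mean(0) - v0.mean(0)    # mean channel-output difference
S = 0.5 * (np.cov(v0.T) + np.cov(v1.T))  # pooled channel covariance

w = np.linalg.solve(S, dv)      # Hotelling template in channel space
d_prime = np.sqrt(dv @ w)       # channelized detectability index
print(f"CHO d' = {d_prime:.2f}")
```

The detectability index d' computed this way is what would be compared against human two-alternative forced choice performance.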
ABSTRACT: We present a PET image reconstruction approach that aims for accurate quantitation through model-based physical corrections and rigorous noise control with clinically acceptable image properties. We focus particularly on image generation chain components that are critical to quantitation, such as physical system modeling, scatter correction, patient motion correction and regularized image reconstruction. Through realistic clinical datasets with inserted lesions, we demonstrate the quantitation improvements due to detector point spread function modeling, model-based single scatter estimation with the associated object-dependent multiple scatter estimation, and non-rigid patient motion estimation and correction. We also describe a penalized-likelihood (PL) whole-body clinical PET image reconstruction approach using the relative difference penalty that achieves superior quantitation over the clinically widespread ordered subsets expectation maximization (OSEM) algorithm while maintaining visual image properties similar to OSEM, and therefore clinical acceptability. We discuss the axial and in-plane smoothing modulation profiles that are necessary to avoid large variations in noise and resolution levels. The overall approach of accurate data acquisition models, corrections for patient-related effects and rigorous noise control greatly improves quantitation and, when combined with repeatable imaging protocols, limits quantitation variability to factors related to patient physiology and scanner performance differences.
Signal & Information Processing Association Annual Summit and Conference (APSIPA ASC), 2012 Asia-Pacific; 01/2012
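The relative difference penalty used in the PL approach above has a standard closed form (Nuyts et al.), penalizing each neighboring-voxel difference relative to the local intensity. A minimal sketch over nearest-neighbour differences of a 1D image (the beta and gamma values are illustrative, not the paper's tuning):

```python
import numpy as np

def relative_difference_penalty(x, beta=1.0, gamma=2.0, eps=1e-12):
    """Relative difference penalty over nearest neighbours of a 1D image:
    sum over pairs of (x_j - x_k)^2 / (x_j + x_k + gamma*|x_j - x_k|).
    Larger gamma makes the penalty more edge-preserving."""
    d = np.diff(x)
    s = x[:-1] + x[1:]
    return beta * np.sum(d**2 / (s + gamma * np.abs(d) + eps))

x = np.array([1.0, 1.2, 5.0, 5.1])  # small step, large edge, small step
print(relative_difference_penalty(x))
```

Note that with gamma = 0 this reduces to a quadratic-over-intensity penalty, while increasing gamma shrinks the cost of large (edge-like) differences, which is the behaviour that lets BSREM preserve lesion contrast while suppressing noise.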
ABSTRACT: Accurate system modeling is essential for improved quantitation and lesion detection. Many investigators have made efforts to accurately model detector blurring using point spread functions (PSFs) in sinogram space and to incorporate them into image reconstruction for accurate quantitation. It has been observed that incorporating the detector PSF into reconstruction leads to improved contrast recovery and resolution with reduced noise, but introduces edge artifacts. It is not straightforward to investigate the impact of PSF kernels on image quality because of the lack of a tool to quantitatively analyze the nonlinear, object-dependent OSEM algorithm. Accordingly, there have been few methods to reduce edge artifacts in a systematic, object-independent way. Our goal is to analyze edge artifacts, as well as contrast recovery, resolution and image noise, in image reconstruction using various PSF models, including full, under-modeled and no PSF kernels, and to provide a systematic solution to reduce edge artifacts without loss of contrast recovery. We focus on penalized likelihood reconstruction with quadratic regularization. Building on previous work, we derive analytical expressions for the local impulse response and covariance where a PSF model mismatch exists, so that one can analytically predict image quality metrics, such as contrast recovery, noise and edge artifacts, as a function of the regularization parameters and the reconstruction PSF kernel. Using these analytical tools, we show that there exists a trade-off between contrast recovery (or resolution), image noise and edge artifacts, and that one can control the trade-off by tuning the regularization parameters and the reconstruction PSF kernel.
Nuclear Science Symposium and Medical Imaging Conference (NSS/MIC), 2011 IEEE; 01/2011
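The local impulse response analysis described above can be illustrated numerically in a toy 1D setting. The sketch below evaluates a mismatched-model local impulse response of the form l ≈ [BᵀWB + βR]⁻¹ BᵀWA e, where A carries the true PSF and B an under-modelled reconstruction PSF; all sizes, PSF widths, the unit weighting, and the beta value are assumptions for illustration:

```python
import numpy as np

def blur_matrix(n, sigma):
    # Row-normalized Gaussian blur as a dense system matrix.
    i = np.arange(n)
    K = np.exp(-(i[:, None] - i[None, :])**2 / (2 * sigma**2))
    return K / K.sum(axis=1, keepdims=True)

n = 64
A = blur_matrix(n, 2.0)   # "true" system PSF (hypothetical width)
B = blur_matrix(n, 1.0)   # under-modelled reconstruction PSF
W = np.eye(n)             # unit statistical weights for simplicity
beta = 0.1
# Hessian of a 1D quadratic (first-difference) roughness penalty.
R = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)

e = np.zeros(n)
e[n // 2] = 1.0           # impulse at a central voxel
# Local impulse response with a PSF model mismatch (B != A):
lir = np.linalg.solve(B.T @ W @ B + beta * R, B.T @ W @ A @ e)
print("peak:", lir.max())
```

Sweeping beta and the width of B in such a predictor is how one can trade off contrast recovery, noise, and edge (ringing) behaviour without running full object-dependent reconstructions.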
ABSTRACT: EM reconstruction with point-spread-function (PSF) modelling is performed to increase the spatial resolution of PET images. These images exhibit slower initial convergence compared to reconstructions without PSF modelling, and more pronounced ringing around the edges of sharp features. We investigate the effect of different objects and of PSF modelling on the convergence rate and edge behaviour of the EM algorithm in two stages: (i) the initial iterations, where the updates are large, and (ii) the later iterations, where the updates are small. For the initial iterations, we compare the sharpness of the EM updates with and without PSF modelling. We show via simulations that PSF modelling during the backprojection step causes smoother updates and consequently smoother images in the early stages of the EM algorithm. For the later iterations, we approximate the image as the ML image plus a perturbation term and develop an approximate update equation for the perturbation, which depends on the Hessian (H) of the log-likelihood. Based on this equation and the spectral analysis of H, we demonstrate how edges with ringing are preserved in the later stages of the algorithm and eliminated only in the case of noiseless data reconstruction with an unrealistically high number of iterations. In addition, we provide an intuitive explanation for the creation of the edge artefacts in terms of the PSF modelling during the backprojection step.
Nuclear Science Symposium Conference Record (NSS/MIC), 2010 IEEE; 12/2010
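The EM behaviour discussed above can be reproduced in a toy 1D setting. A minimal sketch (sizes, iteration counts, and the PSF width are assumptions) comparing MLEM with the PSF included in the system matrix against MLEM on the same blurred data without a PSF model:

```python
import numpy as np

def mlem(y, A, n_iter=50, eps=1e-12):
    """Plain MLEM: x <- x / (A^T 1) * A^T (y / (A x)).
    Whether A includes a PSF blur mimics reconstruction
    with / without PSF modelling."""
    x = np.ones(A.shape[1])
    sens = A.sum(axis=0)  # sensitivity image A^T 1
    for _ in range(n_iter):
        x = x / sens * (A.T @ (y / (A @ x + eps)))
    return x

# Toy 1D system: geometric projection = identity, detector PSF = Gaussian blur.
n = 64
i = np.arange(n)
P = np.exp(-(i[:, None] - i[None, :])**2 / (2 * 1.5**2))
P /= P.sum(axis=1, keepdims=True)

truth = np.zeros(n)
truth[20:40] = 1.0            # sharp-edged object
y = P @ truth                 # noiseless blurred data

x_psf = mlem(y, P)            # PSF modelled: recovers edges, may ring
x_nopsf = mlem(y, np.eye(n))  # no PSF model: reproduces the blurred data
```

The PSF-modelled reconstruction sharpens the edges of the box object (and with more iterations develops the overshoot/ringing the abstract analyzes), while the no-PSF reconstruction simply converges to the blurred data.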
ABSTRACT: Image quality was measured for varied tuning parameters of four penalized likelihood potential functions with reconstructed PET data of multiple hot spheres in a warm background. Statistical image reconstruction with potential functions that penalize differences in neighboring image voxels can produce a smoother image, but large differences that occur at physical boundaries should not be penalized and should be allowed to form. Over-smoothing PET images with small lesions is especially problematic because it can completely smooth a lesion's intensities into the background. Fourteen 1.0-cm spheres with a 6:1 radioactivity concentration relative to the warm background were positioned throughout a 40-cm long phantom with a 36×21-cm oval cross section. By varying the tuning parameters, multiple image sets were reconstructed with a modified block sequential regularized expectation maximization statistical reconstruction algorithm using four potential functions: quadratic, generalized Gaussian, logCosh, and Huber. Regions of interest (ROIs) were positioned on the images, and image quality was measured as contrast recovery, background variability, and signal-to-noise ratio (SNR) across the ROIs. This phantom study was used to further narrow the choice of potential functions and parameter values, to either improve the image quality of small lesions or avoid degrading them when reconstruction parameters are optimized for other image features. Neither the quadratic nor the logCosh potential performed well for small-lesion SNR, because they either over-smoothed the lesions or under-smoothed the background, respectively. Varying the parameter values for the Huber potential had a proportional effect on the background variability and the sphere signal, such that SNR was relatively fixed. The generalized Gaussian potential simultaneously decreased background variability and increased small-lesion contrast recovery, producing SNRs as much as two times higher than the other potential functions.
Nuclear Science Symposium Conference Record (NSS/MIC), 2010 IEEE; 10/2010
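The four potential functions compared above have common textbook forms. A sketch, written as functions of a neighbouring-voxel difference t (the delta and p parameter values are illustrative, not the study's tuning values):

```python
import numpy as np

def quadratic(t):
    # Penalizes all differences equally strongly; tends to over-smooth edges.
    return 0.5 * t**2

def huber(t, delta=1.0):
    # Quadratic for |t| <= delta, linear beyond: edge-preserving.
    a = np.abs(t)
    return np.where(a <= delta, 0.5 * t**2, delta * a - 0.5 * delta**2)

def log_cosh(t, delta=1.0):
    # Smooth quadratic-to-linear transition; ~ t^2/2 for small t.
    return delta**2 * np.log(np.cosh(t / delta))

def generalized_gaussian(t, p=1.2):
    # Bouman-Sauer generalized Gaussian: |t|^p with 1 <= p <= 2.
    return np.abs(t)**p

t = np.linspace(-3, 3, 7)
for f in (quadratic, huber, log_cosh, generalized_gaussian):
    print(f.__name__, np.round(f(t), 3))
```

The shape of each potential for large |t| is what governs how strongly physical boundaries are penalized, which is the behaviour the phantom study traded off against background noise.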
ABSTRACT: Many implementations of model-based scatter correction (MBSC) are based on the single scatter simulation (SSS) formulation within the scan field-of-view (FOV). A fully 3D approach that models both the axial and trans-axial scatter components can accurately model scatter from hot regions in neighboring slices and outside the scan FOV, resulting in greater quantitative accuracy. Herein we discuss how to incorporate the estimation of out-of-field scatter in fully 3D MBSC.
Nuclear Science Symposium Conference Record (NSS/MIC), 2009 IEEE; 12/2009
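Single scatter simulation weights each candidate scatter point by the Klein-Nishina differential cross-section for Compton scattering. A self-contained sketch evaluating it for 511 keV annihilation photons (the full SSS integral over scatter points and attenuation paths is omitted here):

```python
import numpy as np

R_E = 2.8179403262e-15  # classical electron radius, m

def klein_nishina(theta, e_ratio=1.0):
    """Klein-Nishina differential cross-section dsigma/dOmega for a photon
    scattering through angle theta. e_ratio = E / (m_e c^2), which is
    1.0 for 511 keV annihilation photons as used in SSS."""
    p = 1.0 / (1.0 + e_ratio * (1.0 - np.cos(theta)))  # E'/E energy ratio
    return 0.5 * R_E**2 * p**2 * (p + 1.0 / p - np.sin(theta)**2)

theta = np.linspace(0.0, np.pi, 181)
dcs = klein_nishina(theta)
# Forward scattering dominates: the cross-section peaks at theta = 0.
```

In an SSS implementation this factor is integrated over the scatter volume together with the attenuation and detection-efficiency terms along each photon path.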
ABSTRACT: Successful 3D imaging requires accurate and robust methods for scatter estimation and correction. We developed a computationally efficient, fully 3D approach modeling both the axial and trans-axial scatter components. Simulation results showed good agreement with Monte Carlo scatter estimates and improved image quality (IQ). We tested the proposed algorithm on clinical data with similar IQ improvements.
ABSTRACT: Incorporating all data corrections into the system model optimizes image quality in statistical iterative PET image reconstruction. We have previously shown that including attenuation, randoms and scatter in the forward 3D iterative model results in faster convergence and improved image quality for ML-OSEM. This paper extends this work to allow accurate modeling of crystal efficiency, detector deadtime, and the native block-based detector geometry. In order to model these effects, it is necessary to perform forward and back-projections directly from image space to the projection geometry of the PET scanner, rather than to an idealized, equally spaced projection space. We have modified the distance-driven projectors to accurately model both the uneven sinogram spacing due to the ring curvature and the gaps resulting from the block structure of the scanner. This results in a reconstruction method that can incorporate the crystal efficiency and block deadtime effects into the forward system model while maintaining the fast reconstruction times enabled by the distance-driven projector design. Results on the GE Discovery STE scanner show improvements in image resolution consistent with removing the interpolative smoothing of the data into an equally spaced projection space.
ABSTRACT: To investigate the relationship between NEC and image quality for 2D and 3D PET, while simultaneously optimizing the 3D low energy threshold (LET), we performed a series of phantom measurements. The phantom consisted of 46 1-cm fillable hollow spheres on a random grid inside a water-filled oval cylinder, 21 cm tall, 36 cm wide, and 40 cm long. The phantom was imaged on a Discovery ST PET/CT system (GE Healthcare, Milwaukee, WI) in a series of 3 min scans as it decayed from an activity of 7.2 mCi. The scans included LET settings of 375, 400, and 425 keV in 3D, and 375 keV in 2D. Image signal-to-noise ratio (SNR) was calculated and compared with NEC. While both NEC and image quality in 3D improved for LETs above the default of 375 keV, we found that there were significant differences between NEC and image quality for 2D and 3D. Most importantly, 3D image quality was strongly dependent on the reconstruction algorithm and its associated parameters. In conclusion, a direct measure of image quality is necessary for comparing 2D vs. 3D performance.
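The NEC figure of merit used above has a standard definition in terms of the trues (T), scatter (S), and randoms (R) rates. A small sketch (the rates below are illustrative, not the paper's measurements):

```python
def nec(trues, scatters, randoms, k=1.0):
    """Noise-equivalent count rate: NEC = T^2 / (T + S + k*R).
    k = 1 for a low-noise (smoothed) randoms estimate, k = 2 when
    delayed-window randoms subtraction doubles the randoms noise."""
    return trues**2 / (trues + scatters + k * randoms)

# Illustrative rates in kcps:
print(nec(100.0, 40.0, 60.0))         # → 50.0
print(nec(100.0, 40.0, 60.0, k=2.0))  # lower: noisier randoms estimate
```

The paper's point is that this count-rate figure alone does not track reconstructed image SNR across 2D and 3D modes, since the reconstruction algorithm and its parameters also shape the final image quality.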
ABSTRACT: A new PET detector block has been designed to replace the standard detector of the Discovery ST PET/CT system. The new detector block is the same size as the original but consists of an 8×6 (tangential × axial) matrix of crystals rather than the original 6×6. The new crystal dimensions are 4.7 × 6.3 × 30 mm³ (tangential × axial × radial). Full PET/CT systems have been built with these detectors (Discovery STE). Most other aspects of the system are identical to the standard Discovery ST, with differences including the low energy threshold for 3D imaging (now 425 keV) and the front-end electronics. Initial performance evaluation has been done, including NEMA NU 2-2001 tests and imaging of the 3D Hoffman brain phantom and a neck phantom with small lesions. The system sensitivity was 1.90 counts/s/kBq in 2D and 9.35 counts/s/kBq in 3D. Scatter fractions measured for 2D and 3D, respectively, were 18.6% and 34.5%. In 2D, the peak NEC of 89.9 kcps occurred at 47.0 kBq/cc; in 3D, the peak NEC of 74.3 kcps occurred at 8.5 kBq/cc. Spatial resolution (all values in mm FWHM) in 2D was 5.06 transaxial and 5.14 axial for a 1 cm off-axis source, and 5.45 radial, 5.86 tangential, and 6.23 axial for a 10 cm source; in 3D it was 5.13 transaxial and 5.74 axial for the 1 cm source, and 5.92 radial, 5.54 tangential, and 6.16 axial for the 10 cm source. Images of the brain and neck phantoms demonstrate some improvement compared to measurements on a standard Discovery ST.
ABSTRACT: In this study, we implemented a fully 3D maximum likelihood ordered subsets expectation maximization (ML-OSEM) reconstruction algorithm with two methods for correcting random and scatter coincidences: (a) the measured data were pre-corrected for randoms and scatter, and (b) the corrections were incorporated into the iterative algorithm. In 3D PET acquisitions, random and scatter coincidences constitute a significant fraction of the measured coincidences. ML-OSEM reconstruction algorithms assume Poisson-distributed data. Pre-correction for random and scatter coincidences causes deviations from that assumption, potentially leading to increased noise and inconsistent convergence, whereas incorporating the corrections inside the loop of the iterative reconstruction preserves the Poisson nature of the data. We performed Monte Carlo simulations with different randoms fractions and reconstructed the data with the two methods. We also reconstructed clinical patient images. The two methods were compared quantitatively through contrast and noise measurements. The results indicate that for high levels of randoms, incorporating the corrections inside the iterative loop results in superior image quality.
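The two correction schemes compared above differ only in where the randoms and scatter estimates enter the MLEM/OSEM update. A toy 1D sketch (the system matrix, activity levels, and background rates are illustrative):

```python
import numpy as np

def mlem_in_model(y, A, r, s, n_iter=30, eps=1e-12):
    """Method (b): randoms r and scatter s stay in the forward model,
    x <- x / (A^T 1) * A^T ( y / (A x + r + s) ),
    preserving the Poisson model of the measured data y."""
    x = np.ones(A.shape[1])
    sens = A.sum(axis=0)
    for _ in range(n_iter):
        x = x / sens * (A.T @ (y / (A @ x + r + s + eps)))
    return x

def mlem_precorrected(y, A, r, s, n_iter=30, eps=1e-12):
    """Method (a): subtract randoms/scatter first (clipping negatives);
    the pre-corrected data are no longer Poisson-distributed."""
    yc = np.clip(y - r - s, 0.0, None)
    x = np.ones(A.shape[1])
    sens = A.sum(axis=0)
    for _ in range(n_iter):
        x = x / sens * (A.T @ (yc / (A @ x + eps)))
    return x

# Toy example: blurred 1D object with Poisson counting noise.
n = 32
i = np.arange(n)
A = np.exp(-(i[:, None] - i[None, :])**2 / (2 * 1.5**2))
A /= A.sum(axis=1, keepdims=True)
truth = np.zeros(n)
truth[10:20] = 4.0
r = np.full(n, 0.5)   # randoms estimate
s = np.full(n, 0.3)   # scatter estimate
rng = np.random.default_rng(1)
y = rng.poisson(A @ truth + r + s).astype(float)

x_b = mlem_in_model(y, A, r, s)
x_a = mlem_precorrected(y, A, r, s)
```

At high randoms fractions, the clipping and the broken Poisson assumption in method (a) are what drive the noise and convergence differences the abstract reports.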