Article

Filtering in SPECT Image Reconstruction.

Department of Radiology, Radiation Physics Unit, University of Athens, 76, Vas. Sophias Ave., Athens 11528, Greece.
International Journal of Biomedical Imaging 01/2011; 2011:693795. DOI: 10.1155/2011/693795
Source: PubMed

ABSTRACT Single photon emission computed tomography (SPECT) imaging is widely used in nuclear medicine, as its clinical role in the diagnosis and management of several diseases (e.g., myocardial perfusion imaging) is often very helpful. The quality of SPECT images is degraded by several factors, such as noise due to the limited number of counts, attenuation, and scatter of photons. Image filtering is necessary to compensate for these effects and, therefore, to improve image quality. The goal of filtering in tomographic images is to suppress statistical noise while preserving spatial resolution and contrast. The aim of this work is to describe the filters most widely used in SPECT applications and how they affect image quality. The choice of filter type, cut-off frequency, and order is a major problem in clinical routine. In many clinical cases, information on these specific parameters is not provided, and findings cannot be extrapolated to other, similar SPECT imaging applications. A literature review for the determination of the most commonly used filters in cardiac, brain, bone, liver, kidney, and thyroid applications is also presented. As the overview shows, no filter is perfect, and the selection of the proper filter is, most of the time, done empirically. Standardization of image processing may limit the filter types for each SPECT examination to a few filters and a limited range of their parameters. Standardization also helps in reducing image-processing time, as the filters and their parameters must be fixed before being put to clinical use. Consistent selections in commercial reconstruction software lead to comparable results across departments. Manufacturers normally supply default filters and parameters, but these may not be appropriate in all clinical situations. After proper standardization, it is possible to use many suitable filters or one optimal filter.
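
One filter family commonly used in SPECT is the Butterworth low-pass filter, whose behaviour is set by exactly the two parameters named above: the cut-off frequency and the order. The sketch below is not taken from the article; the function name, the cut-off expressed in cycles/pixel (Nyquist = 0.5), and the parameter values are illustrative assumptions. It applies such a filter to a reconstructed slice in the frequency domain:

    import numpy as np

    def butterworth_lowpass(image, cutoff=0.25, order=5):
        """Smooth a 2-D slice with a Butterworth low-pass filter.

        Transfer function: H(f) = 1 / (1 + (f / cutoff)**(2 * order)).
        Frequencies above `cutoff` (cycles/pixel, Nyquist = 0.5) are
        attenuated; a higher `order` gives a sharper roll-off.
        """
        ny, nx = image.shape
        fy = np.fft.fftfreq(ny)[:, np.newaxis]   # cycles/pixel along y
        fx = np.fft.fftfreq(nx)[np.newaxis, :]   # cycles/pixel along x
        f = np.sqrt(fx**2 + fy**2)               # radial spatial frequency
        H = 1.0 / (1.0 + (f / cutoff)**(2 * order))
        return np.real(np.fft.ifft2(np.fft.fft2(image) * H))

    # Illustrative use: suppress Poisson-like noise in a synthetic slice.
    noisy_slice = np.random.poisson(lam=50.0, size=(128, 128)).astype(float)
    smoothed_slice = butterworth_lowpass(noisy_slice, cutoff=0.25, order=5)

Note that conventions differ between vendors and textbooks (the exponent is sometimes written as n instead of 2n, and the cut-off may be given in cycles/cm rather than cycles/pixel), which is one reason filter parameters do not transfer directly between systems and why standardization is emphasized above.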

Related articles:

  • ABSTRACT: The Voxel Imaging PET (VIP) Pathfinder project intends to show the advantages of using pixelated solid-state technology for nuclear medicine applications. It proposes designs for Positron Emission Tomography (PET), Positron Emission Mammography (PEM), and Compton gamma camera detectors with a large number of signal channels (of the order of 10^6). For PET scanners, conventional algorithms such as Filtered Back-Projection (FBP) and Ordered Subset Expectation Maximization (OSEM) are straightforward to use and give good results. However, FBP presents difficulties for detectors with limited angular coverage, such as PEM and Compton gamma cameras, whereas OSEM has impractically large time and memory consumption for a Compton gamma camera with a large number of channels. In this article, the Origin Ensemble (OE) algorithm is evaluated as an alternative algorithm for image reconstruction. Monte Carlo simulations of the PET design are used to compare the performance of OE, FBP, and OSEM in terms of the bias, variance, and average mean squared error (MSE) image quality metrics. For the PEM and Compton camera designs, results obtained with OE are presented.
    Journal of Instrumentation 04/2013; 8.
  • ABSTRACT: In single photon emission computed tomography (SPECT), the collimator is a crucial element of the imaging chain and controls the noise-resolution tradeoff of the collected data. The current study evaluates the effects of different thicknesses of a low-energy high-resolution (LEHR) collimator on tomographic spatial resolution in SPECT. In the present study, the SIMIND Monte Carlo program was used to simulate a SPECT system equipped with an LEHR collimator. A point source of 99mTc, an acrylic cylindrical Jaszczak phantom with cold spheres and rods, and a human anthropomorphic torso phantom (4D-NCAT phantom) were used. Simulated planar images and reconstructed tomographic images were evaluated both qualitatively and quantitatively. Based on the tabulated detector parameters, the contributions of Compton scattering and photoelectric interactions, and the peak-to-Compton (P/C) area in the energy spectra obtained from scanning the sources with 11 collimator thicknesses (ranging from 2.400 to 2.410 cm), a thickness of 2.405 cm was concluded to be the proper LEHR parallel-hole collimator thickness. Image quality analysis with the structural similarity index (SSIM) algorithm and by visual inspection showed that images obtained with a collimator thickness of 2.405 cm were of suitable quality. The projections and reconstructed images prepared with the 2.405 cm LEHR collimator thickness also showed suitable quality and performance-parameter analysis results compared with the other collimator thicknesses.
    World Journal of Nuclear Medicine 01/2012; 11(2):70-74.
  • ABSTRACT: A novel positron emission tomography (PET) scanner design based on a room-temperature pixelated CdTe solid-state detector is being developed within the framework of the Voxel Imaging PET (VIP) Pathfinder project [1]. Simulation results show the great potential of the VIP design to produce high-resolution images even in extremely challenging conditions such as the screening of a human head [2]. With an unprecedentedly high channel density (450 channels/cm^3), image reconstruction is a challenge, and optimization is needed to find the algorithm that best exploits the promising detector potential. The following reconstruction algorithms are evaluated: 2-D Filtered Backprojection (FBP), Ordered Subset Expectation Maximization (OSEM), List-Mode OSEM (LM-OSEM), and the Origin Ensemble (OE) algorithm. The evaluation is based on the comparison of a true image phantom with a set of reconstructed images obtained by each algorithm, through the calculation of image quality figures of merit such as the bias, the variance, and the mean squared error (MSE). A systematic optimization of each algorithm is performed by varying the reconstruction parameters, such as the cut-off frequency of the noise filters and the number of iterations. A region-of-interest (ROI) analysis of the reconstructed phantom is also performed for each algorithm and the results are compared. Additionally, the performance of the image reconstruction methods is compared by calculating the modulation transfer function (MTF). The reconstruction time is also taken into account when choosing the optimal algorithm. The analysis is based on GAMOS [3] simulations including the expected CdTe and electronics specifics. (A minimal sketch of the bias/variance/MSE and SSIM metrics used in these studies is given after this list.)
    Journal of Instrumentation 07/2014; 9(07):C07004.
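
The studies listed above compare reconstructed images against a known phantom using bias, variance, mean squared error (MSE), and the structural similarity index (SSIM). The sketch below shows one way such voxel-wise metrics can be computed; it is not taken from the cited papers, the array names are illustrative, and scikit-image's structural_similarity function is assumed to be available:

    import numpy as np
    from skimage.metrics import structural_similarity

    def image_quality_metrics(reference, reconstruction):
        """Simple voxel-wise quality metrics for a reconstructed image.

        `reference` is the known phantom distribution and `reconstruction`
        the image produced by, e.g., FBP, OSEM, or OE; both are 2-D arrays
        on the same grid, normalized to comparable intensity scales.
        """
        error = reconstruction - reference
        bias = float(np.mean(error))        # average over/under-estimation
        variance = float(np.var(error))     # spread of the error
        mse = float(np.mean(error**2))      # equals bias**2 + variance
        ssim = structural_similarity(
            reference, reconstruction,
            data_range=float(reference.max() - reference.min()),
        )
        return {"bias": bias, "variance": variance, "mse": mse, "ssim": ssim}

    # Example with a toy phantom and a noisy stand-in for a reconstruction.
    phantom = np.zeros((64, 64))
    phantom[20:44, 20:44] = 1.0
    noisy = phantom + 0.1 * np.random.standard_normal(phantom.shape)
    print(image_quality_metrics(phantom, noisy))

Unlike the three voxel-wise statistics, SSIM compares local structure (luminance, contrast, and correlation within a sliding window), which is why it is often reported alongside MSE rather than instead of it.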
