Proceedings of SPIE - The International Society for Optical Engineering

Published by the Society of Photo-Optical Instrumentation Engineers

Articles


Figure 2: Normalized mutual information (NMI) values before and after registration. For 12 patient data sets, NMI values increased from 0.20 ± 0.03 to 0.25 ± 0.03 after registration.  
Automatic Intensity-based 3D-to-2D Registration of CT Volume and Dual-energy Digital Radiography for the Detection of Cardiac Calcification

March 2007 · 1,111 Reads

Xiang Chen

We are investigating three-dimensional (3D) to two-dimensional (2D) registration methods for computed tomography (CT) and dual-energy digital radiography (DR) for the detection of coronary artery calcification. CT is an established tool for the diagnosis of coronary artery diseases (CADs). Dual-energy digital radiography could be a cost-effective alternative for screening coronary artery calcification. In order to utilize CT as the "gold standard" to evaluate the ability of DR images for the detection and localization of calcium, we developed an automatic intensity-based 3D-to-2D registration method for 3D CT volumes and 2D DR images. To generate digitally reconstructed radiographs (DRRs) from the CT volumes, we developed three projection methods, i.e., Gaussian-weighted projection, threshold-based projection, and average-based projection. We tested normalized cross correlation (NCC) and normalized mutual information (NMI) as similarity measures. We used the Downhill Simplex method as the search strategy. Simulated projection images from CT were fused with the corresponding DR images to evaluate the localization of cardiac calcification. The registration method was evaluated on digital phantoms, physical phantoms, and clinical data sets. The results from the digital phantoms show that the success rate is 100%, with mean errors of less than 0.8 mm and 0.2 degrees for both NCC and NMI. The registration accuracy for the physical phantoms is 0.34 ± 0.27 mm. Color overlay and 3D visualization of the clinical data show that the two images are registered well. This is consistent with the improvement of the NMI values from 0.20 ± 0.03 to 0.25 ± 0.03 after registration. The automatic 3D-to-2D registration method is accurate and robust and may provide a useful tool to evaluate dual-energy DR images for the detection of coronary artery calcification.
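
As a rough illustration of the similarity measure reported above, here is a minimal NumPy sketch of NMI computed from a joint histogram (the Studholme form, NMI = (H(A) + H(B)) / H(A, B)). Array names and the average-intensity DRR are illustrative stand-ins, not the authors' code:

```python
import numpy as np

def normalized_mutual_information(a, b, bins=64):
    """NMI(A, B) = (H(A) + H(B)) / H(A, B), from a joint histogram."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()                       # joint probability
    px = pxy.sum(axis=1)                            # marginal of A
    py = pxy.sum(axis=0)                            # marginal of B
    nz = pxy > 0                                    # avoid log(0)
    h_xy = -np.sum(pxy[nz] * np.log(pxy[nz]))       # joint entropy
    h_x = -np.sum(px[px > 0] * np.log(px[px > 0]))  # marginal entropies
    h_y = -np.sum(py[py > 0] * np.log(py[py > 0]))
    return (h_x + h_y) / h_xy

# Stand-in data: a DRR generated by average-based projection along one axis,
# compared against a simulated DR image.
ct = np.random.rand(64, 64, 64)             # stand-in CT volume
drr = ct.mean(axis=2)                       # average-intensity projection
dr = drr + 0.05 * np.random.randn(64, 64)   # stand-in DR image
print(normalized_mutual_information(drr, dr))
```

During optimization, the Downhill Simplex search would repeatedly re-project the CT at candidate poses and maximize this score (or NCC) against the fixed DR image.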

Detailed Characterization of 2D and 3D Scatter-to-Primary Ratios of Various Breast Geometries Using a Dedicated CT Mammotomography System

February 2011 · 378 Reads

With a dedicated breast CT system using a quasi-monochromatic x-ray source and flat-panel digital detector, the 2D and 3D scatter-to-primary ratios (SPRs) of various geometric phantoms having different densities were characterized in detail. Projections were acquired using geometric and anthropomorphic breast phantoms. Each phantom was filled with 700 ml of 5 different water-methanol concentrations to simulate effective boundary densities of breast compositions from 100% glandular (1.0 g/cm³) to 100% fat (0.79 g/cm³). Projections were acquired with and without a beam stop array. For each projection, 2D scatter was determined by cubic-spline interpolation of the values behind the shadow of each beam stop through the object. Scatter-corrected projections were obtained by subtracting the scatter, and the 2D SPRs were obtained as the ratio of the scatter to the scatter-corrected projections. Additionally, the (un)corrected data were individually iteratively reconstructed. The (un)corrected 3D volumes were subsequently subtracted, and the 3D SPRs were obtained from the ratio of the scatter volume to the scatter-corrected (or primary) volume. Results show that the 2D SPR values peak in the center of the volumes and were overall highest for the simulated 100% glandular composition. Consequently, scatter-corrected reconstructions have visibly reduced cupping regardless of the phantom geometry, as well as more accurate linear attenuation coefficients. The corresponding 3D SPRs are highest in the center of the volume and decrease radially. Not surprisingly, the measured SPR values in both 2D and 3D depended on both phantom geometry and object density, with geometry dominating for 3D SPRs. Overall, these results indicate the need for scatter correction given the different geometries and breast densities that will be encountered with 3D cone beam breast CT.
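
A hedged sketch of the 2D SPR step described above: sample the scatter-only signal behind each beam-stop shadow, interpolate it over the whole detector, subtract to get the primary, and take the ratio. For simplicity this operates on a single projection; in the study the scatter estimate from the beam-stop acquisition corrects the matching projection acquired without the array. Function and argument names are illustrative:

```python
import numpy as np
from scipy.interpolate import griddata

def spr_2d(projection, stop_rows, stop_cols):
    """Estimate the 2D scatter-to-primary ratio from a beam-stop projection.

    projection          : 2D detector image
    stop_rows/stop_cols : pixel coordinates of beam-stop shadow centers,
                          where the detected signal is scatter only
    """
    rows, cols = np.mgrid[0:projection.shape[0], 0:projection.shape[1]]
    samples = projection[stop_rows, stop_cols]       # scatter-only samples
    scatter = griddata(                              # cubic interpolation over
        (stop_rows, stop_cols), samples,             # the full detector grid
        (rows, cols), method="cubic", fill_value=samples.mean())
    primary = projection - scatter                   # scatter-corrected
    return scatter / np.clip(primary, 1e-6, None)   # SPR = S / P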

Figure 1. Schematic flow chart of the proposed algorithm for 3D segmentation of the prostate. 
Figure 5. Comparison between the proposed method and manual segmentation. Images from left to right are in three orientations of the same TRUS image volume. The line in yellow is the manual segmentation result. The dashed line in red is the segmentation result of the proposed method.
3D Prostate Segmentation of Ultrasound Images Combining Longitudinal Image Registration and Machine Learning

February 2012 · 560 Reads

We developed a three-dimensional (3D) segmentation method for transrectal ultrasound (TRUS) images, which is based on longitudinal image registration and machine learning. Using longitudinal images of each individual patient, we register previously acquired images to the new images of the same subject. Three orthogonal Gabor filter banks are used to extract texture features from each registered image. Patient-specific Gabor features from the registered images are used to train kernel support vector machines (KSVMs) and then to segment the newly acquired prostate image. The segmentation method was tested on TRUS data from five patients. The average surface distance between automatic and manual segmentation is 1.18 ± 0.31 mm, indicating that our automatic segmentation method based on longitudinal image registration is feasible for segmenting the prostate in TRUS images.
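
A 2D toy version of the feature/classifier pipeline named above (the paper uses three orthogonal 3D Gabor banks): extract per-pixel Gabor magnitude features from a registered prior image, train a kernel SVM on its labels, and classify the new image. Frequencies, orientations, and the stand-in arrays are assumptions for illustration:

```python
import numpy as np
from skimage.filters import gabor
from sklearn.svm import SVC

def gabor_features(image, frequencies=(0.1, 0.2, 0.4),
                   thetas=(0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)):
    """Per-pixel Gabor magnitude features (2D sketch of a 3D filter bank)."""
    feats = []
    for f in frequencies:
        for t in thetas:
            real, imag = gabor(image, frequency=f, theta=t)
            feats.append(np.hypot(real, imag))   # magnitude response
    return np.stack(feats, axis=-1)              # H x W x 12 features

# Train a kernel SVM on labeled pixels of a registered prior image, then
# classify the newly acquired image (random stand-ins below).
prior = np.random.rand(64, 64)
labels = (prior > 0.5).astype(int)               # stand-in prostate mask
X = gabor_features(prior).reshape(-1, 12)
clf = SVC(kernel="rbf", gamma="scale").fit(X, labels.ravel())
new_image = np.random.rand(64, 64)
pred = clf.predict(gabor_features(new_image).reshape(-1, 12)).reshape(64, 64)
```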

Figure 4. (i) Subjects report increasingly blurry vision as a function of increased head roll and amplitude of oscillation. (ii) Subjects report a modest increase in nausea. Subjects do not report that their eyes water more (iii) or become drier. (iv) Subjects report a mild increase in headaches with head roll.
Figure 5. (i) Discomfort and percent correct as a function of the angle of head roll. Scores have been averaged across subjects. (ii) Discomfort scores as a function of time. Different symbols depict the scores in the different head roll conditions. Scores have been averaged across subjects. (iii) Discomfort averaged across all the head roll conditions.
Figure 6. Symptoms as a function of head roll and duration. (i) and (iv): Reported visual blur as a function of head roll (i) and time (iv). (ii) and (v) Nausea as a function of head roll (ii) and time (v). (iii) and (vi) Headache as a function of head roll (iii) and time (vi).
Visual Discomfort with Stereo 3D Displays when the Head is Not Upright

February 2012 · 319 Reads

Properly constructed stereoscopic images are aligned vertically on the display screen, so on-screen binocular disparities are strictly horizontal. If the viewer's inter-ocular axis is also horizontal, he/she makes horizontal vergence eye movements to fuse the stereoscopic image. However, if the viewer's head is rolled to the side, the on-screen disparities now have horizontal and vertical components at the eyes. Thus, the viewer must make horizontal and vertical vergence movements to binocularly fuse the two images. Vertical vergence movements occur naturally, but they are usually quite small. Much larger movements are required when viewing stereoscopic images with the head rotated to the side. We asked whether the vertical vergence eye movements required to fuse stereoscopic images when the head is rolled cause visual discomfort. We also asked whether the ability to see stereoscopic depth is compromised with head roll. To answer these questions, we conducted behavioral experiments in which we simulated head roll by rotating the stereo display clockwise or counter-clockwise while the viewer's head remained upright relative to gravity. While viewing the stimulus, subjects performed a psychophysical task. Visual discomfort increased significantly with the amount of stimulus roll and with the magnitude of on-screen horizontal disparity. The ability to perceive stereoscopic depth also declined with increasing roll and on-screen disparity. The magnitude of both effects was proportional to the magnitude of the induced vertical disparity. We conclude that head roll is a significant cause of viewer discomfort and that it also adversely affects the perception of depth from stereoscopic displays.
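
The geometry behind the induced vertical disparity is a simple rotation: a purely horizontal on-screen disparity d, seen with the inter-ocular axis rolled by angle θ, decomposes into d·cosθ horizontal and d·sinθ vertical at the eyes. A small sketch of that decomposition (the function name and values are illustrative, not from the paper):

```python
import numpy as np

def disparity_at_eyes(d_onscreen, roll_deg):
    """Horizontal/vertical disparity components at the eyes when a purely
    horizontal on-screen disparity is viewed with the head (or display)
    rolled by roll_deg degrees. Pure geometry sketch."""
    theta = np.radians(roll_deg)
    return d_onscreen * np.cos(theta), d_onscreen * np.sin(theta)

for roll in (0, 10, 20, 30):
    h, v = disparity_at_eyes(1.0, roll)   # 1 deg of on-screen disparity
    print(f"roll {roll:2d} deg -> horizontal {h:.2f}, vertical {v:.2f} deg")
```

This matches the abstract's observation that both discomfort and the loss of stereo depth scaled with the magnitude of the induced vertical disparity.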

Accuracy Evaluation of a 3D Ultrasound-guided Biopsy System

March 2013 · 116 Reads

Walter J Wooten · Jonathan A Nye · David M Schuster · [...]

Early detection of prostate cancer is critical in maximizing the probability of successful treatment. The current systematic biopsy approach takes 12 or more randomly distributed core tissue samples within the prostate and, especially with early disease, carries a high potential for a false-negative diagnosis. The purpose of this study is to determine the accuracy of a 3D ultrasound-guided biopsy system. Testing was conducted on prostate phantoms created from an agar mixture with embedded markers. The phantoms were scanned, and the 3D ultrasound system was used to direct the biopsy. Each phantom was analyzed with a CT scan to obtain needle deflection measurements. The deflection experienced throughout the biopsy process was dependent on the depth of the biopsy target. The measured deflections for markers at depths of less than 20 mm, 20-30 mm, and greater than 30 mm were 3.3 mm, 4.7 mm, and 6.2 mm, respectively. This measurement encapsulates the entire biopsy process, from the scanning of the phantom to the firing of the biopsy needle. Increased depth of the biopsy target caused a greater deflection from the intended path in most cases, which was due to the angular incidence of the biopsy needle. Although some deflection was present, this system exhibits a clear advantage in the targeted biopsy of prostate cancer and has the potential to reduce the number of false-negative biopsies for large lesions.

3D Non-rigid Registration Using Surface and Local Salient Features for Transrectal Ultrasound Image-guided Prostate Biopsy

March 2011 · 121 Reads

We present a 3D non-rigid registration algorithm for the potential use in combining PET/CT and transrectal ultrasound (TRUS) images for targeted prostate biopsy. Our registration is a hybrid approach that simultaneously optimizes the similarities from point-based registration and volume matching methods. The 3D registration is obtained by minimizing the distances of corresponding points at the surface and within the prostate and by maximizing the overlap ratio of the bladder neck on both images. The hybrid approach captures not only the deformation at the prostate surface and internal landmarks but also the deformation at the bladder neck region. The registration uses a soft assignment and deterministic annealing process. The correspondences are iteratively established in a fuzzy-to-deterministic approach. B-splines are used to generate a smooth non-rigid spatial transformation. In this study, we tested our registration with pre- and post-biopsy TRUS images of the same patients. Registration accuracy is evaluated using manually defined anatomic landmarks, i.e., calcifications. The root-mean-square (RMS) value of the difference image between the reference and floating images was decreased by 62.6 ± 9.1% after registration. The mean target registration error (TRE) was 0.88 ± 0.16 mm, i.e., less than 3 voxels with a voxel size of 0.38 × 0.38 × 0.38 mm³ for all five patients. The experimental results demonstrate the robustness and accuracy of the 3D non-rigid registration algorithm.
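
The two evaluation metrics quoted above are straightforward to state in code; a minimal sketch (landmark arrays are stand-ins, not the authors' data):

```python
import numpy as np

def target_registration_error(fixed_pts, warped_pts):
    """Mean TRE in mm over corresponding landmarks (e.g. calcifications);
    inputs are N x 3 arrays of physical coordinates."""
    return np.linalg.norm(fixed_pts - warped_pts, axis=1).mean()

def rms_reduction(reference, floating_before, floating_after):
    """Percent decrease in the RMS of the difference image after registration."""
    rms = lambda img: np.sqrt(np.mean((reference - img) ** 2))
    return 100.0 * (rms(floating_before) - rms(floating_after)) / rms(floating_before)
```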

Figure 1. Schematic flow chart of the proposed algorithm for the 3D prostate segmentation.
Automatic 3D Segmentation of Ultrasound Images Using Atlas Registration and Statistical Texture Prior

March 2011 · 347 Reads

We are developing a molecular image-directed, 3D ultrasound-guided, targeted biopsy system for improved detection of prostate cancer. In this paper, we propose an automatic 3D segmentation method for transrectal ultrasound (TRUS) images, which is based on multi-atlas registration and statistical texture prior. The atlas database includes registered TRUS images from previous patients and their segmented prostate surfaces. Three orthogonal Gabor filter banks are used to extract texture features from each image in the database. Patient-specific Gabor features from the atlas database are used to train kernel support vector machines (KSVMs) and then to segment the prostate image from a new patient. The segmentation method was tested in TRUS data from 5 patients. The average surface distance between our method and manual segmentation is 1.61 ± 0.35 mm, indicating that the atlas-based automatic segmentation method works well and could be used for 3D ultrasound-guided prostate biopsy.

Characterization of Image Quality for 3D Scatter Corrected Breast CT Images

March 2011 · 51 Reads

The goal of this study was to characterize the image quality of our dedicated, quasi-monochromatic spectrum, cone beam breast imaging system under scatter-corrected and non-scatter-corrected conditions for a variety of breast compositions. CT projections were acquired of a breast phantom containing two concentric sets of acrylic spheres that varied in size (1-8 mm) based on their polar position. The breast phantom was filled with 3 different concentrations of methanol and water, simulating a range of breast densities (0.79-1.0 g/cc); acrylic yarn was sometimes included to simulate the connective tissue of a breast. For each phantom condition, 2D scatter was measured for all projection angles. Scatter-corrected and uncorrected projections were then reconstructed with an iterative ordered subsets convex algorithm. Reconstructed image quality was characterized using SNR and contrast analysis, followed by a human observer detection task for the spheres in the different concentric rings. Results show that scatter correction effectively reduces the cupping artifact and improves image contrast and SNR. Results from the observer study indicate that there was no statistical difference in the number or sizes of lesions observed in the scatter versus non-scatter corrected images for all densities. Nonetheless, applying scatter correction for differing breast conditions improves overall image quality.
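
One plausible reading of the SNR/contrast analysis, as a short ROI-based sketch (the exact definitions used in the study are not given in the abstract, so these are assumptions):

```python
import numpy as np

def roi_snr_contrast(volume, sphere_mask, background_mask):
    """SNR and contrast from ROIs in a reconstructed volume.
    sphere_mask/background_mask are boolean arrays over the volume."""
    s = volume[sphere_mask].mean()          # mean signal in a sphere ROI
    b = volume[background_mask].mean()      # mean background
    noise = volume[background_mask].std()   # background noise estimate
    return s / noise, (s - b) / b           # SNR, relative contrast
```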

3D Segmentation of Prostate Ultrasound images Using Wavelet Transform

March 2011 · 99 Reads

The current definitive diagnosis of prostate cancer is transrectal ultrasound (TRUS) guided biopsy. However, the current procedure is limited by using 2D biopsy tools to target 3D biopsy locations. This paper presents a new method for automatic segmentation of the prostate in three-dimensional transrectal ultrasound images, by extracting texture features and by statistically matching the geometrical shape of the prostate. A set of Wavelet-based support vector machines (W-SVMs) is located and trained at different regions of the prostate surface. The W-SVMs capture texture priors of ultrasound images for classification of prostate and non-prostate tissues in different zones around the prostate boundary. In the segmentation procedure, these W-SVMs are trained in the sagittal, coronal, and transverse planes. The pre-trained W-SVMs are employed to tentatively label each voxel around the surface of the model as a prostate or non-prostate voxel by texture matching. After post-processing, the labeled voxels in the three planes are overlaid on a prostate probability model. The prostate probability model is created using 10 segmented prostate data sets. Consequently, each voxel has four labels: one from each of the sagittal, coronal, and transverse planes, and one probability label. By defining a weight function for each labeling in each region, each voxel is labeled as a prostate or non-prostate voxel. Experimental results using real patient data show the good performance of the proposed model in segmenting the prostate from ultrasound images.

Numerical 3D modeling of heat transfer in human tissues for microwave radiometry monitoring of brown fat metabolism

February 2013 · 1,386 Reads

Brown adipose tissue (BAT) plays an important role in whole body metabolism and could potentially mediate weight gain and insulin sensitivity. Although some imaging techniques allow BAT detection, there are currently no viable methods for continuous acquisition of BAT energy expenditure. We present a non-invasive technique for long-term monitoring of BAT metabolism using microwave radiometry. A multilayer 3D computational model was created in HFSS™ with 1.5 mm skin, 3-10 mm subcutaneous fat, 200 mm muscle, and a BAT region (2-6 cm³) located between fat and muscle. Based on this model, a log-spiral antenna was designed and optimized to maximize reception of thermal emissions from the target (BAT). The power absorption patterns calculated in HFSS™ were combined with simulated thermal distributions computed in COMSOL® to predict the radiometric signal measured by an ultra-low-noise microwave radiometer. The power received by the antenna was characterized as a function of different levels of BAT metabolism under cold and noradrenergic stimulation. The optimized frequency band was 1.5-2.2 GHz, with an average antenna efficiency of 19%. The simulated power received by the radiometric antenna increased by 2-9 mdBm (noradrenergic stimulus) and 4-15 mdBm (cold stimulus), corresponding to a 15-fold increase in BAT metabolism. Results demonstrated the ability to detect thermal radiation from small volumes (2-6 cm³) of BAT located up to 12 mm deep and to monitor small changes (0.5 °C) in BAT metabolism. As such, the developed miniature radiometric antenna sensor appears suitable for non-invasive long-term monitoring of BAT metabolism.

Nonrigid Registration and Classification of the Kidneys in 3D Dynamic Contrast Enhanced (DCE) MR Images

February 2012 · 125 Reads

We have applied image analysis methods to the assessment of human kidney perfusion based on 3D dynamic contrast-enhanced (DCE) MRI data. This approach consists of 3D non-rigid image registration of the kidneys and fuzzy C-means classification of kidney tissues. The proposed registration method reduced motion artifacts in the dynamic images and improved the analysis of kidney compartments (cortex, medulla, and cavities). The dynamic intensity curves show the successive transition of the contrast agent through the kidney compartments. The proposed method for motion correction and kidney compartment classification may be used to improve the validity and usefulness of further model-based pharmacokinetic analysis of kidney function.
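
A compact NumPy sketch of standard fuzzy C-means (Bezdek's alternating updates), which could be applied to per-voxel time-intensity curves with c = 3 clusters for cortex, medulla, and cavities; this is the generic algorithm, not the authors' implementation:

```python
import numpy as np

def fuzzy_c_means(X, c=3, m=2.0, n_iter=100, seed=0):
    """Fuzzy C-means. X: (n_samples, n_features), e.g. one time-intensity
    curve per kidney voxel. Returns cluster centers and memberships U (c, n)."""
    rng = np.random.default_rng(seed)
    U = rng.random((c, len(X)))
    U /= U.sum(axis=0)                                    # memberships sum to 1
    for _ in range(n_iter):
        W = U ** m                                        # fuzzified weights
        centers = (W @ X) / W.sum(axis=1, keepdims=True)  # weighted means
        d = np.linalg.norm(X[None, :, :] - centers[:, None, :], axis=2) + 1e-12
        # u_ik = d_ik^(-2/(m-1)) / sum_j d_jk^(-2/(m-1))
        U = 1.0 / (d ** (2 / (m - 1)) * np.sum(d ** (-2 / (m - 1)), axis=0))
    return centers, U

# Usage: hard labels per voxel after convergence.
curves = np.random.rand(1000, 20)        # stand-in voxel intensity curves
centers, U = fuzzy_c_means(curves, c=3)
labels = U.argmax(axis=0)
```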

Imaging of prostate cancer: A platform for 3D co-registration of in-vivo MRI, ex-vivo MRI and pathology

February 2012 · 172 Reads

Multi-parametric MRI is emerging as a promising method for prostate cancer diagnosis, prognosis, and treatment planning. However, the localization of in-vivo detected lesions and pathologic sites of cancer remains a significant challenge. To overcome this limitation, we have developed and tested a system for co-registration of in-vivo MRI, ex-vivo MRI, and histology. Three men diagnosed with localized prostate cancer (ages 54-72, PSA levels 5.1-7.7 ng/ml) were prospectively enrolled in this study. All patients underwent 3T multi-parametric MRI that included T2W, DCE-MRI, and DWI prior to robotic-assisted prostatectomy. Ex-vivo multi-parametric MRI was performed on the fresh prostate specimens. Excised prostates were then sliced at regular intervals and photographed both before and after fixation. Slices were perpendicular to the main axis of the posterior capsule, i.e., along the direction of the rectal wall. Guided by the location of the urethra, 2D digital images were assembled into 3D models. Cancer foci, extra-capsular extensions, and zonal margins were delineated by the pathologist and included in the 3D histology data. Locally developed software was applied to register in-vivo, ex-vivo, and histology data using an over-determined set of anatomical landmarks placed in the anterior fibromuscular stroma and the central, transition, and peripheral zones. The root-mean-square distance across corresponding control points was used to assess co-registration error. Two specimens were pT3a and one pT2b (negative margin) at pathology. The software successfully fused in-vivo MRI, ex-vivo MRI of the fresh specimen, and histology using appropriate (rigid and affine) transformation models, with a mean square error of 1.59 mm. Co-registration accuracy was confirmed by multi-modality viewing using operator-guided variable transparency. The method enables successful co-registration of pre-operative MRI, ex-vivo MRI, and pathology, and it provides initial evidence of the feasibility of MRI-guided surgical planning.
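
A minimal sketch of fitting an affine transform to an over-determined landmark set by least squares, with the residual RMS as the co-registration error, in the spirit of the evaluation described above (the function is generic, not the authors' software):

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares affine transform mapping src -> dst landmarks
    (N x 3 arrays, over-determined for N > 4), plus residual RMS (mm)."""
    A = np.hstack([src, np.ones((len(src), 1))])   # homogeneous coordinates
    T, *_ = np.linalg.lstsq(A, dst, rcond=None)    # 4 x 3 affine matrix
    residuals = A @ T - dst
    rms = np.sqrt(np.mean(np.sum(residuals ** 2, axis=1)))
    return T, rms
```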

Use of a graphics processing unit (GPU) to facilitate real-time 3D graphic presentation of the patient skin-dose distribution during fluoroscopic interventional procedures

February 2012 · 51 Reads

We have developed a dose-tracking system (DTS) that calculates the radiation dose to the patient's skin in real time by acquiring exposure parameters and imaging-system geometry from the digital bus on a Toshiba Infinix C-arm unit. The cumulative dose values are then displayed as a color map on an OpenGL-based 3D graphic of the patient for immediate feedback to the interventionalist. Determining which elements on the surface of the patient 3D graphic intersect the beam and calculating the dose for these elements in real time demands fast computation. Reducing the size of the elements results in more computation load on the computer processor, and therefore a tradeoff occurs between the resolution of the patient graphic and the real-time performance of the DTS. The speed of the DTS for calculating dose to the skin is limited by the central processing unit (CPU) and can be improved by using the parallel processing power of a graphics processing unit (GPU). Here, we compare the performance speed of GPU-based DTS software to that of the current CPU-based software as a function of the resolution of the patient graphics. Results show a tremendous improvement in speed using the GPU. While an increase in the spatial resolution of the patient graphics slowed the computational speed of the DTS on the CPU, the speed of the GPU-based DTS was hardly affected. This GPU-based DTS can be a powerful tool for providing accurate, real-time feedback about patient skin dose to physicians while performing interventional procedures.

GPU Accelerated Registration of a Statistical Shape Model of the Lumbar Spine to 3D Ultrasound Images

September 2010 · 304 Reads

Spinal needle injections are technically demanding procedures. The use of ultrasound image guidance without prior CT and MR imagery promises to improve the efficacy and safety of these procedures in an affordable manner. We propose to create a statistical shape model of the lumbar spine and warp this atlas to patient-specific ultrasound images during the needle placement procedure. From CT image volumes of 35 patients, a statistical shape model of the L3 vertebra is built, including the mean shape and main modes of variation. This shape model is registered to the ultrasound data by simultaneously optimizing the parameters of the model and its relative pose. Ground-truth data was established by printing 3D anatomical models of 3 patients using rapid prototyping. CT and ultrasound data of these models were registered using fiducial markers. Pairwise registration of the statistical shape model and 3D ultrasound images led to a mean target registration error of 3.4 mm, while 81% of all cases yielded clinically acceptable accuracy below the 3.5 mm threshold.
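
The "mean shape and main modes of variation" construction is classically done with PCA over aligned training shapes; a minimal NumPy sketch under that assumption (the paper's exact model-building procedure may differ):

```python
import numpy as np

def build_shape_model(shapes):
    """Statistical shape model from pre-aligned training shapes.
    shapes: (n_subjects, n_points * 3) flattened vertex coordinates."""
    mean = shapes.mean(axis=0)
    centered = shapes - mean
    _, s, Vt = np.linalg.svd(centered, full_matrices=False)
    eigenvalues = s ** 2 / (len(shapes) - 1)   # variance of each mode
    return mean, Vt, eigenvalues               # mean, modes, mode variances

def synthesize(mean, modes, eigenvalues, b):
    """Instance = mean + sum_k b_k * sqrt(lambda_k) * mode_k,
    with b the low-dimensional shape parameters being optimized."""
    k = len(b)
    return mean + (b * np.sqrt(eigenvalues[:k])) @ modes[:k]
```

During registration, the optimizer would jointly search over the shape parameters b and the rigid pose to best match the vertebra appearance in the 3D ultrasound volume.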

Automatic 3D Shape Severity Quantification and Localization for Deformational Plagiocephaly

February 2009 · 362 Reads

Recent studies have shown an increase in the occurrence of deformational plagiocephaly and brachycephaly in children. This increase has coincided with the "Back to Sleep" campaign that was introduced to reduce the risk of Sudden Infant Death Syndrome (SIDS). However, there has yet to be an objective quantification of the degree of severity for these two conditions. Most diagnoses are done on subjective factors such as patient history and physician examination. The existence of an objective quantification would help research in areas of diagnosis and intervention measures, as well as provide a tool for finding correlation between the shape severity and cognitive outcome. This paper describes a new shape severity quantification and localization method for deformational plagiocephaly and brachycephaly. Our results show that there is a positive correlation between the new shape severity measure and the scores entered by a human expert.

Automatic 3D Segmentation of the Kidney in MR Images Using Wavelet Feature Extraction and Probability Shape Model

February 2013 · 433 Reads

Numerical estimation of the size of the kidney is useful in evaluating conditions of the kidney, especially when serial MR imaging is performed to evaluate kidney function. This paper presents a new method for automatic segmentation of the kidney in three-dimensional (3D) MR images, by extracting texture features and statistically matching the geometrical shape of the kidney. A set of Wavelet-based support vector machines (W-SVMs) is trained on the MR images. The W-SVMs capture texture priors of MRI for classification of kidney and non-kidney tissues in different zones around the kidney boundary. In the segmentation procedure, these W-SVMs are applied to tentatively label each voxel around the kidney model as a kidney or non-kidney voxel by texture matching. A probability kidney model is created using 10 segmented MRI data sets. The model is initially localized based on the intensity profiles in three directions. Weight functions are defined for each labeled voxel for each Wavelet-based, intensity-based, and model-based label. Consequently, each voxel has three labels and three weights for the Wavelet feature, intensity, and probability model. Using a 3D edge detection method, the model is re-localized and the segmented kidney is modified based on a region-growing method in the model region. The probability model is re-localized based on the results, and this loop continues until the segmentation converges. Experimental results with mouse MRI data show the good performance of the proposed method in segmenting the kidney in MR images.

3D single molecule tracking of quantum-dot labeled antibody molecules using multifocal plane microscopy

February 2010 · 107 Reads

Single molecule tracking in three dimensions (3D) in a live cell environment promises to reveal important new insights into cell biological mechanisms. However, classical microscopy techniques suffer from poor depth discrimination, which severely limits single molecule tracking in 3D with high temporal and spatial resolution. We introduced a novel imaging modality, multifocal plane microscopy (MUM), for the study of subcellular dynamics in 3D. We have shown that MUM provides a powerful approach with which single molecules can be tracked in 3D in live cells. MUM allows for simultaneous imaging at different focal planes, thereby ensuring that trajectories can be imaged continuously at high temporal resolution. A critical requirement for 3D single molecule tracking, as well as localization-based 3D super-resolution imaging, is high 3D localization accuracy. MUM overcomes the depth discrimination problem of classical microscopy-based approaches and supports high-accuracy 3D localization of single molecules/particles. In this way, MUM opens the way for high-precision 3D single molecule tracking and 3D super-resolution imaging within a live cell environment. We have used MUM to reveal complex intracellular pathways that could not be imaged with classical approaches. In particular, we have tracked quantum-dot labeled antibody molecules in the exo/endocytic pathway from the cell interior to the plasma membrane at the single molecule level. Here, we present a brief review of these results.

A Molecular Image-directed, 3D Ultrasound-guided Biopsy System for the Prostate

June 2012 · 96 Reads

Systematic transrectal ultrasound (TRUS)-guided biopsy is the standard method for a definitive diagnosis of prostate cancer. However, this biopsy approach uses two-dimensional (2D) ultrasound images to guide biopsy and can miss up to 30% of prostate cancers. We are developing a molecular image-directed, three-dimensional (3D) ultrasound image-guided biopsy system for improved detection of prostate cancer. The system consists of a 3D mechanical localization system and a software workstation for image segmentation, registration, and biopsy planning. In order to plan biopsies in a 3D prostate, we developed an automatic segmentation method based on wavelet transform. In order to incorporate PET/CT images into ultrasound-guided biopsy, we developed image registration methods to fuse TRUS and PET/CT images. The segmentation method was tested in ten patients with a DICE overlap ratio of 92.4% ± 1.1%. The registration method has been tested in phantoms. The biopsy system was tested in prostate phantoms, and 3D ultrasound images were acquired from two human patients. We are integrating the system for PET/CT-directed, 3D ultrasound-guided, targeted biopsy in human patients.
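
The DICE overlap ratio used for validation has a standard definition, 2|A∩B| / (|A| + |B|); a one-function sketch:

```python
import numpy as np

def dice(a, b):
    """DICE overlap between two binary segmentation masks."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())
```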

Figure 1. AnCoR Flowchart. (1) Manually segment anatomic regions: prostate (white, yellow) and central gland (red, pink); (2) Map cancer (blue) from histology to MP-MRI; (3) Perform affine registration constraining Pr and CG boundaries; (4) Update atlas by averaging T2-w MRI from (3) and use as registration fixed image; perform FFD registration, constraining both Pr and CG using equal weights; (5) Identify cancer spatial distribution.
Statistical 3D Prostate Imaging Atlas Construction via Anatomically Constrained Registration

March 2013 · 84 Reads

Statistical imaging atlases allow for the integration of information from multiple patient studies collected across different image scales and modalities, such as multi-parametric (MP) MRI and histology, providing population statistics regarding a specific pathology within a single canonical representation. Such atlases are particularly valuable in the identification and validation of meaningful imaging signatures for disease characterization in vivo within a population. Despite the high incidence of prostate cancer, an imaging atlas focused on different anatomic structures of the prostate, i.e., an anatomic atlas, has yet to be constructed. In this work we introduce a novel framework for MRI atlas construction that uses an iterative, anatomically constrained registration (AnCoR) scheme to enable the proper alignment of the prostate (Pr) and central gland (CG) boundaries. Our current implementation uses endorectal, 1.5T or 3T, T2-weighted MRI from 51 patients with biopsy-confirmed cancer; however, the prostate atlas is seamlessly extensible to include additional MRI parameters. In our cohort, radical prostatectomy is performed following MP-MR image acquisition; thus ground truth annotations for prostate cancer are available from the histological specimens. Once mapped onto MP-MRI through elastic registration of histological slices to corresponding T2-w MRI slices, the annotations are utilized by the AnCoR framework to characterize the 3D statistical distribution of cancer per anatomic structure. Such distributions are useful for guiding biopsies toward regions of higher cancer likelihood and for understanding imaging profiles for disease extent in vivo. We evaluate our approach via the Dice similarity coefficient (DSC) for different anatomic structures (delineated by expert radiologists): Pr, CG, and peripheral zone (PZ). The AnCoR-based atlas had a CG DSC of 90.36% and a Pr DSC of 89.37%. Moreover, we evaluated the deviation of anatomic landmarks, the urethra and verumontanum, and found deviations of 3.64 mm and 4.31 mm, respectively. Alternative strategies that use only the T2-w MRI or the prostate surface to drive the registration were implemented as comparative approaches. The AnCoR framework outperformed the alternative strategies by providing the lowest landmark deviations.

Figure 1. Domain and Range Weighting Functions 
Figure 2. Optimal 3D Bilateral Filters 
Figure 3. Application of the Optimal 3D Bilateral Filters to In Vivo Data 
Figure 4. Optimal 4D Bilateral Filters 
Figure 5. Application of the Optimal 4D Bilateral Filters to In Vivo Data 
Denoising of 4D Cardiac Micro-CT Data Using Median-Centric Bilateral Filtration

February 2012 · 309 Reads

Bilateral filtration has proven an effective tool for denoising CT data. The classic filter utilizes Gaussian domain and range weighting functions in 2D. More recently, other distributions have yielded more accurate results in specific applications, and the bilateral filtration framework has been extended to higher dimensions. In this study, brute-force optimization is employed to evaluate the use of several alternative distributions for both domain and range weighting: Andrew's Sine Wave, El Fallah Ford, Gaussian, Flat, Lorentzian, Huber's Minimax, Tukey's Bi-weight, and Cosine. Two variations on the classic bilateral filter which use median filtration to reduce bias in range weights are also investigated: median-centric and hybrid bilateral filtration. Using the 4D MOBY mouse phantom reconstructed with noise (stdev. ~ 65 HU), hybrid bilateral filtration, a combination of the classic and median-centric filters, with Flat domain and range weighting is shown to provide optimal denoising results (PSNRs: 31.69, classic; 31.58, median-centric; 32.25, hybrid). To validate these phantom studies, the optimal filters are also applied to in vivo, 4D cardiac micro-CT data acquired in the mouse. In a constant region of the left ventricle, hybrid bilateral filtration with Flat domain and range weighting is shown to provide optimal smoothing (stdev: original, 72.2 HU; classic, 20.3 HU; median-centric, 24.1 HU; hybrid, 15.9 HU). While the optimal results were obtained using 4D filtration, the 3D hybrid filter is ultimately recommended for denoising 4D cardiac micro-CT data because it is more computationally tractable and less prone to artifacts (MOBY PSNR: 32.05; left ventricle stdev: 20.5 HU).
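
A 2D sketch of classic versus median-centric bilateral filtration (the study works in 3D/4D and sweeps many more weighting functions; the hybrid variant, which combines the two, is not shown). Only Gaussian and Flat range kernels are implemented, and the Flat cutoff of 2·sigma_r is an assumption, not a value from the paper:

```python
import numpy as np
from scipy.ndimage import median_filter

def bilateral_2d(img, radius=3, sigma_d=2.0, sigma_r=50.0,
                 range_kernel="gaussian", median_centric=False):
    """Bilateral filter; if median_centric, range weights are computed
    against a median-filtered reference to reduce bias from noisy centers."""
    ref = median_filter(img, size=2 * radius + 1) if median_centric else img
    pad = np.pad(img.astype(float), radius, mode="reflect")
    out = np.zeros(img.shape, dtype=float)
    wsum = np.zeros(img.shape, dtype=float)
    h, wd = img.shape
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            nb = pad[radius + dy: radius + dy + h, radius + dx: radius + dx + wd]
            diff = nb - ref                      # range distance to reference
            if range_kernel == "flat":
                w_r = (np.abs(diff) < 2 * sigma_r).astype(float)
            else:
                w_r = np.exp(-diff ** 2 / (2 * sigma_r ** 2))
            w_d = np.exp(-(dx ** 2 + dy ** 2) / (2 * sigma_d ** 2))
            out += w_r * w_d * nb
            wsum += w_r * w_d
    return out / np.maximum(wsum, 1e-12)
```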

Figure 1. Segmentation and shape modeling. First three images from the left: the axial, coronal, and sagittal views of the binary segmentation of the EE (white) on top of the EI (gray) phase. Two images on the right: the PDMs, composed of 512 points, shown on the surface of the lungs at the EE phase and at the EI phase, with group-wise correspondence indicated by colors.
Figure 5. Left: the second-session EI phase image shown on top of the first-session EI phase image (coronal view); the intersection region is the darker region in the bottom part of the lungs. Right: the PDM of the partial lung shape, composed of 64 surface points, shown with the surface of the lungs at the EE phase and at the EI phase with group-wise correspondence.
Shape-correlated Deformation Statistics for Respiratory Motion Prediction in 4D Lung

February 2010 · 101 Reads

4D image-guided radiation therapy (IGRT) for free-breathing lungs is challenging due to the complicated respiratory dynamics. Effective modeling of respiratory motion is crucial to account for the effects of motion on the dose to tumors. We propose a shape-correlated statistical model on dense image deformations for patient-specific respiratory motion estimation in 4D lung IGRT. Using the shape deformations of the high-contrast lungs as the surrogate, the statistical model trained from the planning CTs can be used to predict the image deformation at delivery verification time, with the assumption that the respiratory motion at both times is similar for the same patient. Dense image deformation fields obtained by diffeomorphic image registrations characterize the respiratory motion within one breathing cycle. A point-based particle optimization algorithm is used to obtain the shape models of the lungs with group-wise surface correspondences. Canonical correlation analysis (CCA) is adopted in training to maximize the linear correlation between the shape variations of the lungs and the corresponding dense image deformations. Both intra- and inter-session CT studies are carried out on a small group of lung cancer patients and evaluated in terms of tumor location accuracy. The results suggest potential applications of the proposed method.
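
A toy scikit-learn sketch of the CCA training-and-prediction step described above. All dimensions and arrays are illustrative stand-ins; a real pipeline would likely reduce both blocks with PCA before CCA, as the feature counts far exceed the number of breathing phases:

```python
import numpy as np
from sklearn.cross_decomposition import CCA

# One row per breathing phase in the planning 4D CT (stand-in sizes).
n_phases = 10
shapes = np.random.randn(n_phases, 60)     # lung surface point displacements
deforms = np.random.randn(n_phases, 300)   # flattened dense deformation fields

# Train CCA to maximize linear correlation between the two blocks.
cca = CCA(n_components=3).fit(shapes, deforms)

# At delivery verification time, predict the dense deformation from a newly
# observed lung-shape deformation (here, the first training phase).
predicted_dvf = cca.predict(shapes[:1])
```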

Near-infrared image-guided laser ablation of artificial caries lesions - art. no. 64250T

February 2007 · 82 Reads

Laser removal of dental hard tissue can be combined with optical, spectral, or acoustic feedback systems to selectively ablate dental caries and restorative materials. Near-infrared (NIR) imaging has considerable potential for the optical discrimination of sound and demineralized tissue. The objective of this study was to test the hypothesis that two-dimensional NIR images of demineralized tooth surfaces can be used to guide CO₂ laser ablation for the selective removal of artificial caries lesions. Highly patterned artificial lesions were produced by submerging 5 × 5 mm² bovine enamel samples in a demineralizing solution for a 9-day period while sound areas were protected with acid-resistant varnish. NIR imaging and polarization sensitive optical coherence tomography (PS-OCT) were used to acquire depth-resolved images at a wavelength of 1310 nm. An image-processing module was developed to analyze the NIR images and to generate optical maps. The optical maps were used to control a CO₂ laser for the selective removal of the lesions at a uniform depth. This experiment showed that the patterned artificial lesions were removed selectively using the optical maps with minimal damage to sound enamel areas. Post-ablation NIR and PS-OCT imaging confirmed that demineralized areas were removed while sound enamel was conserved. This study successfully demonstrated that near-IR imaging can be integrated with a CO₂ laser ablation system for the selective removal of dental caries.

Figure 7. Intensity-based segmentation of the ventricles using mean and variance (time: < 100 ms): initial, full numerical implementation, proposed discrete implementation (left to right). As expected, the discrete approximation captures all but the sub-pixel-width tissue separating the ventricles.
Figure 8. Ventricle segmentation using mean (time: ∼ 400 ms). Initialized from a single bubble placed in each ventricle. Results are shown from two views: front, side (left to right).
Figure 10. Circle shown at various time steps while shrinking inward under unit speed: full numerical implementation and discrete approximation (left to right). In both techniques the circle disappears to a point as expected.
Fast approximate surface evolution in arbitrary dimension - art. no. 69144C

March 2008 · 92 Reads

The level set method is a popular technique used in medical image segmentation; however, the numerics involved make its use cumbersome. This paper proposes an approximate level set scheme that removes much of the computational burden while maintaining accuracy. Abandoning a floating-point representation of the signed distance function, we represent it with integer values, which partitions the image domain into three regions: the zero level set and the regions inside and outside it. For the cases of 2D and 3D, we detail rules governing the evolution and maintenance of these three regions. Arbitrary energies can be implemented in the framework. This scheme has several desirable properties: computations are only performed along the zero level set; the approximate distance function requires only a few simple integer comparisons for maintenance; smoothness regularization involves only a few integer calculations and may be handled apart from the energy itself; the zero level set is represented exactly, removing the need for interpolation off the interface; and evolutions proceed on the order of milliseconds per iteration on conventional uniprocessor workstations. To highlight its accuracy, flexibility, and speed, we demonstrate the technique on intensity-based segmentations under various statistical metrics. Results for 3D imagery show the technique is fast even for image volumes.
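
A toy narrow-band evolution in the spirit of this scheme, not the paper's exact integer signed-distance rules: only pixels on the one-pixel-wide bands around the interface are touched, and they switch between inside and outside according to the sign of the speed:

```python
import numpy as np
from scipy.ndimage import binary_dilation, binary_erosion

def evolve(mask, speed, n_iter):
    """Evolve a binary region under a speed field, updating only the
    one-pixel interface bands each iteration (a simplified sketch)."""
    for _ in range(n_iter):
        grow = binary_dilation(mask) & ~mask     # outside band: may switch in
        shrink = mask & ~binary_erosion(mask)    # inside band: may switch out
        mask = (mask | (grow & (speed > 0))) & ~(shrink & (speed < 0))
    return mask

# A circle shrinking under uniform negative speed vanishes to a point,
# mirroring the behavior shown in Figure 10.
yy, xx = np.mgrid[0:64, 0:64]
circle = (yy - 32) ** 2 + (xx - 32) ** 2 < 20 ** 2
result = evolve(circle, speed=-np.ones((64, 64)), n_iter=25)
print(result.sum())   # -> 0 once the circle has collapsed
```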

Figure 1. Quantitative fractional anisotropy (QA) maps computed for single- and multi-shell DWI acquisitions in the mouse brain. The contrast-to-noise ratio visibly increases with the b-value shell, while the highest signal-to-noise ratio (SNR) is found in the lowest b-value shells (especially at b-value = 1000 s/mm²).
Figure 2. The signal-to-noise ratio (SNR) of quantitative anisotropy (QA) in single- and multi-shell DWIs in the mouse cingulum is plotted. The same set of angular samples was used for each single- and multi-shell reconstruction. The lowest diffusion weighting gives the highest SNR.
7T Multi-shell Hybrid Diffusion Imaging (HYDI) for Mapping Brain Connectivity in Mice

March 2015 · 337 Reads

Diffusion weighted imaging (DWI) is widely used to study microstructural characteristics of the brain. High angular resolution diffusion imaging (HARDI) samples diffusivity at a large number of spherical angles to better resolve neural fibers that mix or cross. Here, we implemented a framework for advanced mathematical analysis of mouse 5-shell HARDI (b = 1000, 3000, 4000, 8000, 12000 s/mm²), also known as hybrid diffusion imaging (HYDI). Using q-ball imaging (QBI) at ultra-high field strength (7 Tesla), we computed diffusion and fiber orientation distribution functions (dODF, fODF) to better detect crossing fibers. We also computed a quantitative anisotropy (QA) index and deterministic tractography from the peak orientation of the fODFs. We found that the signal to noise ratio (SNR) of the QA was significantly higher in single- and multi-shell reconstructed data at the lower b-values (b = 1000, 3000, 4000 s/mm²) than at higher b-values (b = 8000, 12000 s/mm²); the b = 1000 s/mm² shell increased the SNR of the QA in all multi-shell reconstructions, but when used alone or in <5-shell reconstructions, it led to higher angular error for the major fibers, compared to 5-shell HYDI. Multi-shell data reconstructed major fibers with less error than single-shell data and were most successful at reducing the angular error when the lowest shell was excluded (b = 1000 s/mm²). Overall, high-resolution connectivity mapping with 7T HYDI offers great potential for understanding unresolved changes in mouse models of brain disease.

Table 1. Empirical Errors for All Segmented Objects (all metrics in mm) 
Figure 3. Intermediate results of the level set method on a single slice. (A) Skin segmentation initialized with a cube surface. (B) Outer abdominal wall segmentation initialized with the skin result.
Figure 4. Segmentation results for four subjects. The first column represents the results for the bone skeleton and skin. The second column demonstrates the segmentation of the outer abdominal wall (green) overlaid with grids of labeled ground truth (red). The third to sixth columns show the intra- and inter-variability of the abdominal wall over slices and the segmentation results of our approach (red).
Automatic Segmentation of Abdominal Wall in Ventral Hernia CT: A Pilot Study

March 2013 · 85 Reads

The treatment of ventral hernias (VH) has been a challenging problem for medical care. Repair of these hernias is fraught with failure; recurrence rates ranging from 24-43% have been reported, even with the use of biocompatible mesh. Currently, computed tomography (CT) is used to guide intervention through expert, but qualitative, clinical judgments; notably, quantitative metrics based on image processing are not used. We propose that image segmentation methods to capture the three-dimensional structure of the abdominal wall and its abnormalities will provide a foundation on which to measure geometric properties of hernias and surrounding tissues and, therefore, to optimize intervention. To date, automated segmentation algorithms have not been presented to quantify the abdominal wall and potential hernias. In this pilot study with four clinically acquired CT scans of post-operative patients, we demonstrate a novel approach to geometric classification of the abdominal wall and essential abdominal features (including bony landmarks and skin surfaces). Our approach uses a hierarchical design in which the abdominal wall is isolated in the context of the skin and bony structures using level set methods. All segmentation results were quantitatively validated with surface errors based on manually labeled ground truth. The mean surface error for the outer surface of the abdominal wall was less than 2 mm. This approach establishes a baseline for characterizing the abdominal wall for improving VH care.
