Medical Image Analysis

Published by Elsevier
Print ISSN: 1361-8415
Publications
The aim of this study was to explore and verify whether a previously designed Analysis-by-Synthesis algorithm is capable of precisely measuring implant wear. The abrasion of polyethylene particles is seen as the main reason for the loosening of prosthetic components in the hip. Wear lies in the sub-millimeter range, so precision is a crucial point in its measurement. In the Analysis-by-Synthesis algorithm, a synthetic X-ray image of the implant is matched to its original X-ray projection. This intensity-based approach and the use of X-ray images, with their inherently high resolution, in principle allow precise measurements. Wear was defined from the estimated implant parameters in a way that minimizes the impact of the main sources of error; these sources were studied theoretically in a sensitivity analysis. The use of the algorithm was tested in vitro as well as in vivo. In experimental data, the accuracy and the impact of pelvic position and orientation were studied. The precision was assessed using dual radiographs of 20 patients with total hip replacement. A standard deviation of 49 µm was found.
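Such an intensity-based match between a synthetic projection and the measured radiograph can be illustrated with a toy translational search driven by normalized cross-correlation. This is a minimal sketch of the matching idea only, assuming integer pixel shifts and NCC as the similarity measure; it is not the authors' full pose optimization or synthetic image generation.

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation between two equally sized patches."""
    a = (a - a.mean()) / (a.std() + 1e-12)
    b = (b - b.mean()) / (b.std() + 1e-12)
    return float((a * b).mean())

def best_translation(xray, synthetic, search=10):
    """Exhaustively search integer shifts of the synthetic projection
    against the measured radiograph and return the best-matching offset."""
    h, w = synthetic.shape
    best, best_dy, best_dx = -np.inf, 0, 0
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            patch = xray[search + dy:search + dy + h, search + dx:search + dx + w]
            s = ncc(patch, synthetic)
            if s > best:
                best, best_dy, best_dx = s, dy, dx
    return best_dy, best_dx, best

# toy usage: a bright blob displaced by (3, -2) pixels
img = np.zeros((80, 80)); img[43:53, 28:38] = 1.0
tmpl = np.zeros((60, 60)); tmpl[30:40, 20:30] = 1.0
print(best_translation(img, tmpl))
```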
 
Over the last 15 years, Magnetic Resonance Imaging (MRI) has become a reference examination for cardiac morphology, function and perfusion in humans. Yet, due to the characteristics of cardiac MR images and to the great variability of the images among patients, the problem of heart cavity segmentation in MRI is still open. This paper reviews fully and semi-automated methods performing segmentation in short-axis images of cardiac cine MRI sequences. The medical background and the specific segmentation difficulties associated with these images are presented. For this particularly complex segmentation task, prior knowledge is required. We therefore propose an original categorization of cardiac segmentation methods, with special emphasis on what level of external information is required (weak or strong) and how it is used to constrain segmentation. After reviewing the principles of the methods and analyzing segmentation results, we conclude with a discussion and future trends in this field regarding methodological and medical issues.
 
Editorial of the Medical Image Analysis journal 17 (2013) 711
 
In this paper, we present the concept of diffusing models to perform image-to-image matching. Given two images to match, the main idea is to consider the object boundaries in one image as semi-permeable membranes and to let the other image, considered as a deformable grid model, diffuse through these interfaces by the action of effectors situated within the membranes. We illustrate this concept by an analogy with Maxwell's demons. We show that this concept relates to more traditional attraction-based ones, with optical flow techniques as an intermediate step. We use the concept of diffusing models to derive three different non-rigid matching algorithms: one using all the intensity levels in the static image, one using only contour points, and a last one operating on already segmented images. Finally, we present results with synthesized deformations and real medical images, with applications to heart motion tracking and three-dimensional inter-patient matching.
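A concrete illustration of the demons idea, as it is commonly written today, is the per-pixel force u = (m − f)∇f / (|∇f|² + (m − f)²) followed by Gaussian smoothing of the displacement field. The sketch below is a minimal single-resolution 2D version under those assumptions, not the paper's three algorithm variants.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

def demons_2d(static, moving, n_iter=50, sigma=1.5):
    """Toy single-resolution demons registration of `moving` onto `static`."""
    gy, gx = np.gradient(static)                      # gradient of the static image
    grad2 = gx**2 + gy**2
    uy = np.zeros_like(static); ux = np.zeros_like(static)
    yy, xx = np.meshgrid(np.arange(static.shape[0]),
                         np.arange(static.shape[1]), indexing="ij")
    for _ in range(n_iter):
        warped = map_coordinates(moving, [yy + uy, xx + ux], order=1, mode="nearest")
        diff = warped - static
        denom = grad2 + diff**2
        denom[denom == 0] = 1e-12
        # demon force along the static image gradient
        uy -= diff * gy / denom
        ux -= diff * gx / denom
        # regularize the displacement field (diffusion-like smoothing)
        uy = gaussian_filter(uy, sigma)
        ux = gaussian_filter(ux, sigma)
    return uy, ux
```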
 
The goal of this study was to objectively and reliably estimate the precision of measuring the 2D migration of hip prostheses, i.e. the change over time in the distance between the implant and the bone observable in X-ray images. To reach this goal, a generally valid scheme for determining the standard deviation of distance measurements in 3D-to-2D projections was worked out. The scheme was applied to four previously published methods for measuring the migration of the prosthetic cup using standard radiographs. Applying the scheme yields measures of the sensitivity of the migration measurement to the relevant sources of error. Inserting previously published values for the magnitudes of the contributing errors, the standard deviation of the migration measurement was calculated numerically, resulting in values of up to several millimeters.
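A generic numerical realization of such a scheme is first-order error propagation: estimate the sensitivity of the measurement to each contributing error by finite differences and combine the contributions as a root sum of squares. A minimal sketch, in which the toy projection model and the error magnitudes are placeholders rather than the published values:

```python
import numpy as np

def propagated_std(measure, params, sigmas, eps=1e-5):
    """First-order error propagation: sigma_out^2 = sum_i (df/dp_i * sigma_i)^2."""
    params = np.asarray(params, dtype=float)
    var = 0.0
    for i, s in enumerate(sigmas):
        p_hi = params.copy(); p_hi[i] += eps
        p_lo = params.copy(); p_lo[i] -= eps
        dfdp = (measure(p_hi) - measure(p_lo)) / (2 * eps)   # numerical sensitivity
        var += (dfdp * s) ** 2
    return np.sqrt(var)

# hypothetical example: projected 2D distance of a 3D offset under a pinhole model
def projected_distance(p):
    dx, dz, f, z = p          # placeholder parameters, not the published ones
    return f * dx / (z + dz)

print(propagated_std(projected_distance, [10.0, 0.0, 1000.0, 900.0],
                     sigmas=[0.1, 2.0, 5.0, 10.0]))
```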
 
We present a machine learning approach called shape regression machine (SRM) for efficient segmentation of an anatomic structure that exhibits a deformable shape in a medical image, e.g., the left ventricle endocardial wall in an echocardiogram. The SRM achieves efficient segmentation via statistical learning of the interrelations among shape, appearance, and anatomy, which are exemplified by an annotated database. The SRM is a two-stage approach. In the first stage, which estimates a rigid shape to solve the automatic initialization problem, it derives a regression solution to object detection that needs just one scan in principle and a sparse set of scans in practice, avoiding the exhaustive scanning required by state-of-the-art classification-based detection approaches while yielding comparable detection accuracy. In the second stage, which estimates the nonrigid shape, it again learns a nonlinear regressor to directly associate the nonrigid shape with image appearance. The underpinning of both stages is a novel image-based boosting ridge regression (IBRR) method that enables multivariate, nonlinear modeling and accommodates fast evaluation. We demonstrate the efficiency and effectiveness of the SRM in experiments on segmenting the left ventricle (LV) endocardium from B-mode echocardiograms of the apical four-chamber view. The proposed algorithm is able to automatically detect and accurately segment the LV endocardial border in about 120 ms.
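The regression backbone of such an approach can be illustrated with plain closed-form ridge regression mapping image features to shape parameters; the paper's IBRR additionally uses boosted image-based features and nonlinear modeling, which are not reproduced in this sketch.

```python
import numpy as np

def ridge_fit(X, Y, lam=1.0):
    """Closed-form ridge regression: W = (X^T X + lam I)^-1 X^T Y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ Y)

def ridge_predict(X, W):
    return X @ W

# toy usage: 200 training images described by 50 features, predicting 8 shape parameters
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))
true_W = rng.normal(size=(50, 8))
Y = X @ true_W + 0.1 * rng.normal(size=(200, 8))
W = ridge_fit(X, Y, lam=0.5)
print(np.abs(ridge_predict(X, W) - Y).mean())
```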
 
The fundamental property of the analytic signal is the split of identity, i.e. the separation of qualitative and quantitative information in the form of the local phase and the local amplitude, respectively. The structural representation provided by the local phase, which is independent of brightness and contrast, is especially interesting for numerous image processing tasks. Recently, an extension of the analytic signal from 1D to 2D, covering also intrinsically 2D structures, was proposed. We show the advantages of this improved concept on ultrasound RF and B-mode images. Specifically, we use the 2D analytic signal for the envelope detection of RF data. This leads to advantages in extracting the information-bearing signal from the modulated carrier wave. We illustrate this, first, by visual assessment of the images and, second, by performing goodness-of-fit tests against a Nakagami distribution, which indicate a clear improvement of the statistical properties. The evaluation is performed for multiple window sizes and parameter estimation techniques. Finally, we show that the 2D analytic signal allows for an improved estimation of local features on B-mode images.
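For orientation, the classical 1D analytic signal already separates local amplitude (the envelope) from local phase on a single RF line via the Hilbert transform; the 2D analytic signal discussed here generalizes this to intrinsically 2D structures. A sketch of the 1D baseline using SciPy:

```python
import numpy as np
from scipy.signal import hilbert

def rf_envelope(rf_line):
    """Envelope (local amplitude) and local phase of one RF scan line
    via the 1D analytic signal."""
    analytic = hilbert(rf_line)           # rf + i * HilbertTransform(rf)
    return np.abs(analytic), np.angle(analytic)

# toy RF line: amplitude-modulated carrier
t = np.linspace(0.0, 1.0, 2000)
rf = (1.0 + 0.5 * np.sin(2 * np.pi * 5 * t)) * np.cos(2 * np.pi * 200 * t)
env, phase = rf_envelope(rf)
print(env.max(), env.min())
```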
 
Registration of pre- and intra-interventional data is one of the key technologies for image-guided radiation therapy, radiosurgery, minimally invasive surgery, endoscopy, and interventional radiology. In this paper, we survey those 3D/2D data registration methods that utilize 3D computed tomography or magnetic resonance images as the pre-interventional data and 2D X-ray projection images as the intra-interventional data. The 3D/2D registration methods are reviewed with respect to image modality, image dimensionality, registration basis, geometric transformation, user interaction, optimization procedure, subject, and object of registration.
 
Due to their different physical origins, X-ray mammography and Magnetic Resonance Imaging (MRI) provide complementary diagnostic information. However, the correlation of their images is challenging due to differences in dimensionality, patient positioning and compression state of the breast. Our automated registration takes over part of the correlation task. The registration method is based on a biomechanical finite element model, which is used to simulate mammographic compression. The deformed MRI volume can be compared directly with the corresponding mammogram. The registration accuracy is determined by a number of patient-specific parameters. We optimize these parameters - e.g. breast rotation - using image similarity measures. The method was evaluated on 79 datasets from clinical routine. The mean target registration error was 13.2 mm in a fully automated setting. On the basis of our results, we conclude that a completely automated registration of volume images with 2D mammograms is feasible. The registration accuracy is within the clinically relevant range and thus beneficial for multimodal diagnosis.
 
In this paper we address the problem of spatio-temporal acoustic boundary detection in echocardiography. We propose a phase-based feature detection method to be used as the front end to higher-level 2D+T/3D+T reconstruction algorithms. We develop a 2D+T version of this algorithm and illustrate its performance on some typical echocardiogram sequences. We show how our temporal-based algorithm helps to reduce the number of spurious feature responses due to speckle and provides feature velocity estimates. Further, our approach is intensity-amplitude invariant. This makes it particularly attractive for echocardiographic segmentation, where choosing a single global intensity-based edge threshold is problematic.
 
Accurate alignment of intra-operative X-ray coronary angiography (XA) and pre-operative cardiac CT angiography (CTA) may improve procedural success rates of minimally invasive coronary interventions for patients with chronic total occlusions. It was previously shown that incorporating patient-specific coronary motion extracted from 4D CTA increases the robustness of the alignment. However, pre-operative CTA is often acquired with gating at end-diastole, in which case patient-specific motion is not available. For such cases, we investigate the possibility of using population-based coronary motion models to provide constraints for the 2D+t/3D registration. We propose a methodology for building statistical motion models of the coronary arteries from a training population of 4D CTA datasets. We compare the 2D+t/3D registration performance of the proposed statistical models with other motion estimates, including the patient-specific motion extracted from 4D CTA, the mean motion of a population, and the motion predicted from the cardiac shape. The coronary motion models, constructed on a training set of 150 patients, had a generalization accuracy of 1 mm root mean square point-to-point distance. Their 2D+t/3D registration accuracy on one cardiac cycle of 12 monoplane XA sequences was similar to, if not better than, that of the 4D CTA-based motion, irrespective of which respiratory model and which feature-based 2D/3D distance metric was used. The resulting model-based coronary motion estimate showed good applicability for registration of a subsequent cardiac cycle.
 
Fluoroscopy is the mainstay of interventional radiology. However, the images are 2D and visualisation of the vasculature requires nephrotoxic contrast agent. Cone-beam computed tomography is often available, but involves a large radiation dose and interruption to the clinical workflow. We propose the use of 2D-3D image registration to allow digital tomosynthesis (DTS) slices to be produced using standard fluoroscopy equipment. Our method automatically produces patient-anatomy-specific slices and removes clutter resulting from bones. Such slices could provide additional intraoperative information, offering improved guidance precision. Image acquisition would fit with the interventional clinical workflow and would not require a high X-ray dose. Phantom results showed a 1133% contrast-to-noise improvement compared to standard fluoroscopy. Patient results showed that our method enabled visualisation of clinically relevant features: the outline of the aorta, the aortic bifurcation and some aortic calcifications.
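Contrast-to-noise comparisons of this kind are commonly computed as CNR = |mean(feature ROI) − mean(background ROI)| / std(background ROI). A small sketch under that common definition; the study's exact ROI protocol may differ:

```python
import numpy as np

def cnr(image, feature_mask, background_mask):
    """Contrast-to-noise ratio between a feature ROI and a background ROI."""
    f = image[feature_mask]
    b = image[background_mask]
    return abs(f.mean() - b.mean()) / (b.std() + 1e-12)

def cnr_improvement(cnr_dts, cnr_fluoro):
    """Percentage CNR improvement of DTS slices over standard fluoroscopy."""
    return 100.0 * (cnr_dts - cnr_fluoro) / cnr_fluoro
```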
 
It has been shown that analysis of two-dimensional (2D) bone X-ray images based on the fractional Brownian motion (fBm) model provides a good indicator for quantifying alterations in the three-dimensional (3D) bone micro-architecture. However, this 2D measurement is not a direct assessment of the 3D bone properties. In this paper, we first show that S(3D), the self-similarity parameter of 3D fBm, is linked to S(2D), that of its 2D projection, by S(3D) = S(2D) - 0.5. In the light of this theoretical result, we have experimentally examined whether this relation holds for trabecular bone. Twenty-one specimens of trabecular bone were derived from frozen human femoral heads. They were digitized using a high-resolution micro-CT. Their projections were simulated numerically by summing the data along the three orthogonal directions, and both the 3D and 2D self-similarity parameters were measured. Results show that the self-similarity of the 3D bone volumes and that of their projections are linked by the above relation. This demonstrates that a simple projection provides 3D information about the bone structure. This information can be a valuable adjunct to bone mineral density for the early diagnosis of osteoporosis.
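One standard way to estimate such a self-similarity (Hurst-like) parameter from a projection is the structure-function method: for fBm the mean squared increment grows as lag^(2H), so the parameter is half the slope of a log-log fit. A sketch under that assumption; the estimator actually used in the study may differ.

```python
import numpy as np

def hurst_2d(field, lags=(1, 2, 4, 8, 16)):
    """Estimate the self-similarity parameter of a 2D field from the slope of
    log E[(X(x+h) - X(x))^2] versus log h (structure-function method)."""
    msq = []
    for h in lags:
        dx = field[:, h:] - field[:, :-h]
        dy = field[h:, :] - field[:-h, :]
        msq.append(0.5 * (np.mean(dx**2) + np.mean(dy**2)))
    slope = np.polyfit(np.log(lags), np.log(msq), 1)[0]
    return slope / 2.0

def axis_projections(volume):
    """Simulated projections: sum a 3D volume along each orthogonal axis."""
    return [volume.sum(axis=a) for a in range(3)]
```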
 
Constructing a 3D bone surface model from a limited number of calibrated 2D X-ray images (e.g. two) and a 3D point distribution model is a challenging task, especially when one would like to construct a patient-specific surface model of a bone with pathology. One of the key steps in such a 2D/3D reconstruction is to establish correspondences between the 2D images and the 3D model. This paper presents a 2D/3D correspondence building method based on a non-rigid 2D point matching process, which iteratively uses a symmetric injective nearest-neighbour mapping operator and 2D thin-plate-spline based deformations to find a fraction of best-matched 2D point pairs between features extracted from the X-ray images and those extracted from the 3D model. The estimated point pairs are then used to set up a set of 3D point pairs, so that the 2D/3D reconstruction problem is turned into a 3D/3D one, whose solutions are well studied. Incorporating this 2D/3D correspondence building method, a 2D/3D reconstruction scheme combining statistical instantiation with regularized shape deformation has been developed. Comprehensive experiments on clinical datasets and on images of cadaveric femurs with both non-pathologic and pathologic cases were designed and conducted to evaluate the performance of the 2D/3D correspondence building method as well as that of the 2D/3D reconstruction scheme. Quantitative and qualitative evaluation results are given, which demonstrate the validity of the present method and scheme.
 
This paper presents an improved method for the detection of "significant" low-level objects in medical images. The method overcomes topological problems in which multiple redundant saddle points are detected in digital images. Information derived from watershed regions is used to select and refine saddle points in the discrete domain and to construct the watersheds and watercourses (ridges and valleys). We also demonstrate an improved method of pruning the tessellation by which low-level objects are defined in zero-order images. The algorithm was applied to a set of medical images with promising results. Evaluation was based on theoretical analysis and human observer experiments.
 
We present a new approach to incorporating information from heterogeneous images of cells migrating in a 3D gel. We study 3D angiogenic sprouting, where cells burrow into the gel matrix, communicate with other cells and create vascular networks. We combine time-lapse fluorescent images of stained cell nuclei and transmitted light images of the background gel to track cell trajectories. The nuclei images are sampled less frequently due to phototoxicity. Hence, 3D cell tracking can be performed more reliably when 2D sprout profiles, extracted from gel matrix images, are effectively incorporated. We employ a Bayesian filtering approach to optimally combine the two heterogeneous images with different sampling rates. We construct stochastic models to predict cell locations and sprout profiles and condition the likelihood of nuclei locations on the sprout profile. The conditional distribution is non-Gaussian and the cell dynamics are non-linear. To jointly update cell and sprout estimates, we use a Rao-Blackwellized particle filter. Simulation and experimental results show accurate tracking of multiple cells along with sprout formation, demonstrating the synergistic effect of incorporating the two types of images.
 
Biopsy of the prostate using 2D transrectal ultrasound (TRUS) guidance is the current gold standard for diagnosis of prostate cancer; however, the procedure is limited by the use of 2D biopsy tools to target 3D biopsy locations. We propose a technique for patient-specific 3D prostate model reconstruction from a sparse collection of non-parallel 2D TRUS biopsy images. Our method conforms to the restrictions of current TRUS biopsy equipment and could be efficiently incorporated into current clinical biopsy procedures for needle guidance without the need for expensive hardware additions. In this paper, the model reconstruction technique is evaluated using simulated biopsy images from 3D TRUS prostate images of 10 biopsy patients. All reconstructed models are compared to their corresponding 3D manually segmented prostate models for evaluation of prostate volume accuracy and surface errors (both regional and global). The number of 2D TRUS biopsy images used for prostate modeling was varied to determine the optimal number of images necessary for accurate prostate surface estimation.
 
Fluorescence tomography of tissues has generally been limited to systems that require fixed geometries or measurements employing fibers. Certain technological advances, however, have more recently allowed the development of complete-projection 360-degree tomographic approaches using non-contact detection and illumination. Employing multiple illumination projections and CCD cameras as detection devices vastly increases the information content acquired, posing non-trivial computational and experimental requirements. In this paper, we use singular-value analysis to optimize experimental parameters relevant to the design and operation of emerging 360-degree fluorescence molecular tomography (FMT) methods and systems for small-animal imaging. We present the theoretical and experimental methodology, optimization results and their experimental validation. We further discuss how these results can be employed to improve the performance of existing FMT systems and guide the design of new systems.
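Singular-value analysis of a forward (weight) matrix is a generic tool for this kind of optimization: the number of singular values above the noise floor indicates how much independent information a given illumination/detection configuration contributes. A minimal sketch, with the forward matrices left as hypothetical inputs:

```python
import numpy as np

def usable_modes(forward_matrix, rel_threshold=1e-3):
    """Count singular values above a relative threshold; larger counts indicate
    configurations that carry more independent information."""
    s = np.linalg.svd(forward_matrix, compute_uv=False)
    s = s / s[0]
    return int(np.sum(s > rel_threshold)), s

# compare two hypothetical acquisition geometries by their singular value spectra
A_few_projections = np.random.default_rng(1).normal(size=(50, 400))
A_many_projections = np.random.default_rng(2).normal(size=(360, 400))
print(usable_modes(A_few_projections)[0], usable_modes(A_many_projections)[0])
```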
 
This paper presents a series of 3D statistical models of the cortical sulci. They are built from points located automatically over the sulcal fissures and brought into correspondence automatically using variants of the iterative closest point algorithm. The models are progressively improved by adding more and more structural and configural information, and the final results are consistent with findings from other anatomical studies. The models can be used to locate and label anatomical features automatically in 3D MR images of the head, for analysis, visualisation, classification, and normalisation.
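The correspondence step referred to here follows the iterative closest point (ICP) pattern: repeatedly match each point to its nearest neighbour on the other point set and re-estimate the best rigid transform. A minimal point-to-point sketch (the paper uses variants of this basic loop):

```python
import numpy as np
from scipy.spatial import cKDTree

def rigid_fit(src, dst):
    """Least-squares rigid transform (R, t) mapping src onto dst (Kabsch/SVD)."""
    cs, cd = src.mean(0), dst.mean(0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:            # avoid reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs

def icp(src, dst, n_iter=30):
    """Basic ICP: nearest-neighbour correspondences followed by a rigid update."""
    tree = cKDTree(dst)
    cur = src.copy()
    for _ in range(n_iter):
        _, idx = tree.query(cur)
        R, t = rigid_fit(cur, dst[idx])
        cur = cur @ R.T + t
    return cur
```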
 
Segmentation is an important part of image processing, and it often has a large impact on quantitative image analysis results. Fully automated, operator-independent segmentation procedures that work reliably in populations with large biological variation are extremely difficult to design, and some kind of operator intervention is usually required, at least in pathological cases. We developed a variety of 3D editing tools that can be used to correct or improve the results of initial automatic segmentation procedures. Specifically, we discuss and show examples of three types of editing tools, which we term hole-filling (tool 1), point-bridging (tool 2), and surface-dragging (tool 3). Each tool comes in a number of flavors, all of which are implemented in a truly 3D manner. We describe the principles, evaluate efficiency and flexibility, and discuss the advantages and disadvantages of each tool. We further demonstrate the superiority of the 3D approach over the time-consuming slice-by-slice editing of 3D datasets, which is still widely used in medical image processing today. We conclude that performance criteria for automatic segmentation algorithms may be eased significantly by including 3D editing tools early in the design process.
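A hole-filling editor of this kind can be built directly on 3D binary morphology. A minimal sketch using SciPy; the point-bridging and surface-dragging tools are interactive and not reproduced here.

```python
import numpy as np
from scipy import ndimage

def fill_holes_3d(mask):
    """Fill enclosed cavities in a 3D binary segmentation in a truly 3D manner."""
    return ndimage.binary_fill_holes(mask)

# toy example: a hollow cube becomes solid
mask = np.zeros((20, 20, 20), dtype=bool)
mask[5:15, 5:15, 5:15] = True
mask[8:12, 8:12, 8:12] = False           # an internal cavity
print(mask.sum(), fill_holes_3d(mask).sum())
```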
 
Quantitative measurements of the progression (or regression) of carotid plaque burden are important in monitoring patients and evaluating new treatment options. 3D ultrasound (US) has been used to monitor the progression of carotid artery plaques in symptomatic and asymptomatic patients, and different methods of measuring various ultrasound phenotypes of atherosclerosis have been developed. We have developed a quantitative metric to analyze changes in carotid plaque morphology from 3D US. The method matches the vertices on the carotid arterial wall surface with those on the luminal surface. Vessel-wall-plus-plaque thickness (VWT) is obtained by computing the distance between each corresponding pair of vertices and is then superimposed on the arterial wall to produce the VWT map. Since the progression of plaque thickness is important in monitoring patients who are at risk for stroke, we also compute the change in VWT by comparing the VWT maps obtained for a patient at two different time points. In this paper, we propose a technique to flatten the 3D VWT and VWT-change maps in an area-preserving manner, in order to facilitate the visualization and interpretation of these maps.
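Once vertex correspondence between the wall and lumen surfaces is established, the VWT map and its change over time reduce to per-vertex distances. A minimal sketch, assuming matched (N, 3) vertex arrays as input; the area-preserving flattening is not reproduced.

```python
import numpy as np

def vwt_map(wall_vertices, lumen_vertices):
    """Vessel-wall-plus-plaque thickness: distance between corresponding
    wall and lumen vertices (arrays of shape (N, 3))."""
    return np.linalg.norm(wall_vertices - lumen_vertices, axis=1)

def vwt_change(wall_t0, lumen_t0, wall_t1, lumen_t1):
    """Point-wise VWT change between two time points (positive = progression)."""
    return vwt_map(wall_t1, lumen_t1) - vwt_map(wall_t0, lumen_t0)
```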
 
Statistical shape models (SSMs) have by now been firmly established as a robust tool for the segmentation of medical images. While 2D models have been in use since the early 1990s, widespread utilization of three-dimensional models appeared only in recent years, primarily made possible by breakthroughs in automatic detection of shape correspondences. In this article, we review the techniques required to create and employ these 3D SSMs. While we concentrate on landmark-based shape representations and thoroughly examine the most popular variants of Active Shape and Active Appearance models, we also describe several alternative approaches to statistical shape modeling. Structured into the topics of shape representation, model construction, shape correspondence, local appearance models and search algorithms, we present an overview of the current state of the art in the field. We conclude with a survey of applications in the medical field and a discussion of future developments.
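At its core, the landmark-based construction reviewed here is a principal component analysis of aligned landmark vectors, with new shapes expressed as the mean plus a clamped combination of the leading modes. A minimal sketch of that construction, assuming alignment and correspondence have already been established:

```python
import numpy as np

def build_ssm(shapes, n_modes=5):
    """shapes: (n_samples, n_landmarks * dim) pre-aligned landmark vectors."""
    mean = shapes.mean(axis=0)
    X = shapes - mean
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    eigvals = (s**2) / (len(shapes) - 1)           # variance explained per mode
    return mean, Vt[:n_modes], eigvals[:n_modes]

def project_shape(shape, mean, modes, eigvals, limit=3.0):
    """Express a new shape in the model and clamp each mode to +/- limit * sqrt(eigval)."""
    b = modes @ (shape - mean)
    b = np.clip(b, -limit * np.sqrt(eigvals), limit * np.sqrt(eigvals))
    return mean + modes.T @ b
```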
 
Real-time 3D echocardiography (RT3DE) promises a more objective and complete cardiac functional analysis through dynamic 3D image acquisition. Despite several efforts towards automation of left ventricle (LV) segmentation and tracking, these remain challenging research problems due to the poor quality of the acquired images, which usually suffer from missing anatomical information, speckle noise, and a limited field of view (FOV). Recently, multi-view fusion 3D echocardiography has been introduced, in which multiple conventional single-view RT3DE images are acquired with small probe movements and fused together after alignment. Multi-view fusion helps to improve image quality and anatomical information and extends the FOV. We now take this work further by comparing single-view and multi-view fused images in a systematic study. In order to better illustrate the differences, this work evaluates the image quality and information content of single-view and multi-view fused images using image-driven LV endocardial segmentation and tracking. The image-driven methods were utilized to fully exploit the image quality and anatomical information present in the images, thus deliberately not including any high-level constraints such as prior shape or motion knowledge in the analysis approaches. Experiments show that multi-view fused images are better suited for LV segmentation and tracking, while relatively more failures and errors were observed on single-view images.
 
Statistical shape modelling potentially provides a powerful tool for generating patient-specific, 3D representations of bony anatomy for computer-aided orthopaedic surgery (CAOS) without the need for a preoperative CT scan. Furthermore, freehand 3D ultrasound (US) provides a non-invasive method for digitising bone surfaces in the operating theatre that enables a much greater region to be sampled compared with conventional direct-contact (i.e., pointer-based) digitisation techniques. In this paper, we describe how these approaches can be combined to simultaneously generate and register a patient-specific model of the femur and pelvis to the patient during surgery. In our implementation, a statistical deformation model (SDM) was constructed for the femur and pelvis by performing a principal component analysis on the B-spline control points that parameterise the freeform deformations required to non-rigidly register a training set of CT scans to a carefully segmented template CT scan. The segmented template bone surface, represented by a triangulated surface mesh, is instantiated and registered to a cloud of US-derived surface points using an iterative scheme in which the weights corresponding to the first five principal modes of variation of the SDM are optimised in addition to the rigid-body parameters. The accuracy of the method was evaluated using clinically realistic data obtained from three intact human cadavers (three whole pelves and six femurs). For each bone, a high-resolution CT scan and a rigid-body registration transformation, calculated using bone-implanted fiducial markers, served as the gold-standard bone geometry and registration transformation, respectively. After aligning the final instantiated model and CT-derived surfaces using the iterative closest point (ICP) algorithm, the average root-mean-square distance between the surfaces was 3.5 mm over the whole bone and 3.7 mm in the region of surgical interest. The corresponding distances after aligning the surfaces using the marker-based registration transformation were 4.6 and 4.5 mm, respectively. We conclude that, despite limitations on the regions of bone accessible using US imaging, this technique has potential as a cost-effective and non-invasive method to enable surgical navigation during CAOS procedures, without the additional radiation dose associated with performing a preoperative CT scan or intraoperative fluoroscopic imaging. However, further development is required to investigate errors using error measures relevant to specific surgical procedures.
 
Coronary artery diseases are usually revealed using X-ray angiographies. Such images are complex to analyse because they provide a 2D projection of a 3D object. Medical diagnosis suffers from inter- and intra-clinician variability. Therefore, reliable software for the 3D reconstruction and labeling of the coronary tree is strongly desired. This requires matching the vessels across the different available angiograms, and an approach that identifies the arteries by their anatomical names is a way to solve this difficult problem. This paper focuses on the automatic labeling of the left coronary tree in X-ray angiography. Our approach is based on a 3D topological model, built from the 3D anthropomorphic phantom Coronix. The phantom is projected under different viewing angles to provide a database of 2D topological models. In parallel, the vessel skeleton is extracted from the patient's angiogram. The algorithm compares the skeleton with the 2D topological model whose vascular net shape is most similar. The method performs in a hierarchical manner, first labeling the main artery, then the sub-branches. It handles inter-individual anatomical variations, segmentation errors and image ambiguities. We tested the method on standard angiograms of Coronix and on clinical examinations of nine patients. We obtained scores of 90% correct labeling for the main arteries and 60% for the sub-branches. The method appears to be particularly efficient for the arteries in focus. It is therefore a very promising tool for the automatic 3D reconstruction of the coronary tree from monoplane temporal angiographic clinical sequences.
 
We propose a technique to obtain accurate and smooth surfaces of patient specific vascular structures, using two steps: segmentation and reconstruction. The first step provides accurate and smooth centerlines of the vessels, together with cross section orientations and cross section fitting. The initial centerlines are obtained from a homotopic thinning of the vessels segmented using a level set method. In addition to circle fitting, an iterative scheme fitting ellipses to the cross sections and correcting the centerline positions is proposed, leading to a strong improvement of the cross section orientations and of the location of the centerlines. The second step consists of reconstructing the surface based on this data, by generating a set of topologically preserved quadrilateral patches of branching tubular structures. It improves Felkel's meshing method (Felkel et al., 2004) by: allowing a vessel to have multiple parents and children, reducing undersampling artifacts, and adapting the cross section distribution. Experiments, on phantom and real datasets, show that the proposed technique reaches a good balance in terms of smoothness, number of triangles, and distance error. This technique can be applied in interventional radiology simulations, virtual endoscopy and in reconstruction of smooth and accurate three-dimensional models for use in simulation.
 
The aim of this article is to build trajectories for virtual endoscopy inside 3D medical images in as automatic a way as possible. Usually the construction of this trajectory is left to the clinician, who must define some points on the path manually using three orthogonal views. But for a complex structure such as the colon, those views give little information on the shape of the object of interest. Path construction in 3D images then becomes a very tedious task, and precise a priori knowledge of the structure is needed to determine a suitable trajectory. We propose a more automatic path tracking method to overcome those drawbacks: we are able to build a path given only one or two end points and the 3D image as inputs. This work is based on previous work by Cohen and Kimmel [Int. J. Comp. Vis. 24 (1) (1997) 57] on extracting paths in 2D images using the Fast Marching algorithm. Our original contribution is twofold. On the one hand, we present a general technical contribution that extends minimal paths to 3D images and gives new improvements of the approach that are relevant in 2D as well as in 3D for extracting linear structures in images. It includes techniques to make the path extraction scheme faster and easier by reducing the user interaction. We also develop a new method to extract a centered path in tubular structures. Synthetic and real medical images are used to illustrate each contribution. On the other hand, we show that our method can be efficiently applied to the problem of finding a centered path in tubular anatomical structures with minimal interactivity, and that this path can be used for virtual endoscopy. Results are shown in various anatomical regions (colon, brain vessels, arteries) with different 3D imaging protocols (CT, MR).
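A discrete analogue of the minimal-path idea can be sketched as a shortest path through a cost image in which bright tubular voxels are cheap to traverse; the paper itself uses the continuous Fast Marching formulation, which yields sub-pixel paths and the centering refinement. A hedged sketch using scikit-image:

```python
import numpy as np
from skimage.graph import route_through_array

def minimal_path(image, start, end, eps=1e-3):
    """Extract a low-cost path between two seed points: bright tubular voxels
    are given low cost so that the path follows the structure."""
    cost = 1.0 / (image.astype(float) + eps)
    path, total_cost = route_through_array(cost, start, end,
                                           fully_connected=True, geometric=True)
    return np.array(path), total_cost

# toy 2D example: a bright diagonal "vessel"
img = np.zeros((64, 64)); np.fill_diagonal(img, 1.0)
path, c = minimal_path(img, (0, 0), (63, 63))
print(len(path))
```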
 
Minimally invasive interventions are often performed under fluoroscopic guidance. Drawbacks of fluoroscopic guidance are that the presented images are 2D projections and that both the patient and the clinician are exposed to radiation. Image-guided navigation using pre-interventionally acquired 3D MR or CT data is an alternative. However, this often requires invasive anatomical landmark-based, marker-based or surface-based image-to-patient registration. In this paper, a coupling between an image-guided navigation system and an intraoperative C-arm X-ray device with 3D imaging capabilities (3D rotational X-ray (3DRX) system), which enables direct navigation on 3DRX volumes without invasive image-to-patient registration, is described and evaluated. The coupling is established in a one-time preoperative calibration procedure. The individual steps in the registration procedure are explained and evaluated. The navigation accuracy achieved with this coupling is approximately one millimeter.
 
The detection of brain aneurysms plays a key role in reducing the incidence of intracranial subarachnoid hemorrhage (SAH), which carries a high rate of morbidity and mortality. The majority of non-traumatic SAH cases are caused by ruptured intracranial aneurysms, and accurate detection can decrease a significant proportion of misdiagnosed cases. A scheme for automated detection of intracranial aneurysms is proposed in this study. Applied to the segmented cerebral vasculature, the method detects aneurysms as suspect regions on the vascular tree, and is designed to assist diagnosticians with their interpretations and thus reduce missed detections. In the current approach, the vessels are segmented and their medial axis is computed. Small regions along the vessels are inspected, and the writhe number is introduced as a new surface descriptor to quantify how closely any given region approximates a tubular structure. Aneurysms are detected as non-tubular regions of the vascular tree. The geometric assumptions underlying the approach are investigated analytically and validated experimentally. The method is tested on 3D rotational angiography (3D-RA) and computed tomography angiography (CTA). In our experiments, 100% sensitivity was achieved with an average of 0.66 false positives per study on 3D-RA data and 5.36 false positives per study on CTA data.
 
In neurobiology, the 3D reconstruction of neurons followed by the identification of dendritic spines is essential for studying neuronal morphology, function and biophysical properties. Most existing methods suffer from problems of low reliability, poor accuracy and require much user interaction. In this paper, we present a method to reconstruct dendrites using a surface representation of the neuron. The skeleton of the dendrite is extracted by a procedure based on the medial geodesic function that is robust and topology preserving, and it is used to accurately identify spines. The sensitivity of the algorithm on the various parameters is explored in detail and the method is shown to be robust.
 
Traditional Hessian-based vessel filters often have difficulty detecting complex structures such as bifurcations because of an over-simplified cylindrical model. To address this problem, we present a shape-tuned strain energy density function to measure vessel likelihood in 3D medical images. This method is inspired by established stress-strain principles in mechanics. By considering the Hessian matrix as a stress tensor, the three invariants from orthogonal tensor decomposition are used independently or in combination to formulate distinctive functions for vascular shape discrimination, brightness contrast and structure strength measurement. Moreover, a mathematical description of Hessian eigenvalues for general vessel shapes is obtained, based on an intensity continuity assumption, and a relative Hessian strength term is presented to ensure the dominance of second-order derivatives as well as to suppress undesired step edges. Finally, we adopt a multi-scale scheme to find an optimal solution through scale space. The proposed method is validated in experiments with a digital phantom and non-contrast-enhanced pulmonary CT data. It is shown that our model performs more effectively in enhancing vessel bifurcations and preserving details, compared to three existing filters.
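For context, the conventional Hessian pipeline that this work improves upon computes Gaussian second derivatives, assembles the Hessian at every voxel, and turns its eigenvalues into a tubularity score. The sketch below is a generic version of that baseline (a crude Frangi-style stand-in), not the proposed strain-energy filter.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def hessian_eigenvalues_3d(volume, sigma=1.5):
    """Eigenvalues of the Gaussian-smoothed Hessian at every voxel,
    sorted by absolute value (|l1| <= |l2| <= |l3|)."""
    pairs = [(0, 0), (0, 1), (0, 2), (1, 1), (1, 2), (2, 2)]
    deriv = {}
    for i, j in pairs:
        order = [0, 0, 0]; order[i] += 1; order[j] += 1
        deriv[(i, j)] = gaussian_filter(volume, sigma, order=order)
    H = np.zeros(volume.shape + (3, 3))
    for i in range(3):
        for j in range(3):
            H[..., i, j] = deriv[(min(i, j), max(i, j))]
    lam = np.linalg.eigvalsh(H)
    order = np.argsort(np.abs(lam), axis=-1)
    return np.take_along_axis(lam, order, axis=-1)

def tubularity(volume, sigma=1.5):
    """Simple bright-tube score: high where l2 and l3 are strongly negative
    and l1 is small; a crude stand-in for Frangi-style vesselness."""
    l1, l2, l3 = np.moveaxis(hessian_eigenvalues_3d(volume, sigma), -1, 0)
    score = np.abs(l2 * l3) / (1.0 + np.abs(l1))
    score[(l2 > 0) | (l3 > 0)] = 0.0       # bright tubes require negative l2, l3
    return score
```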
 
Accurate cortical thickness estimation is important for the study of many neurodegenerative diseases. Many approaches have been previously proposed, which can be broadly categorised as mesh-based and voxel-based. While the mesh-based approaches can potentially achieve subvoxel resolution, they usually lack the computational efficiency needed for clinical applications and large database studies. In contrast, voxel-based approaches are computationally efficient, but lack accuracy. The aim of this paper is to propose a novel voxel-based method based upon the Laplacian definition of thickness that is both accurate and computationally efficient. A framework was developed to estimate and integrate partial volume information within the thickness estimation process. First, in a Lagrangian step, the boundaries are initialized using the partial volume information. Subsequently, in an Eulerian step, a pair of partial differential equations is solved on the remaining voxels to compute the thickness. Using partial volume information significantly improved the accuracy of the thickness estimation on synthetic phantoms and improved reproducibility on real data. Significant differences in the hippocampus and temporal lobe between healthy controls (NC), mild cognitive impairment (MCI) and Alzheimer's disease (AD) patients were found on clinical data from the ADNI database. We compared our method in terms of precision, computational speed and statistical power against the Eulerian approach. With a slight increase in computation time, accuracy and precision were greatly improved. Power analysis demonstrated the ability of our method to yield statistically significant results when comparing AD and NC. Overall, with our method the number of samples needed to find significant differences between the two groups is reduced by 25%.
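The Laplacian definition of thickness solves Laplace's equation over the cortical ribbon, with the potential fixed to 0 on the inner boundary and 1 on the outer boundary, and measures thickness along the streamlines of the resulting gradient field; the paper's contribution is to initialize and correct this using partial volume information. A minimal sketch of the Laplace step by Jacobi iteration:

```python
import numpy as np

def solve_laplace(tissue, inner, outer, n_iter=500):
    """Jacobi iteration of Laplace's equation on a 3D grid.
    tissue, inner, outer: boolean masks (cortical ribbon and its two boundaries)."""
    u = np.zeros(tissue.shape, dtype=float)
    u[outer] = 1.0
    for _ in range(n_iter):
        u_new = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
                 np.roll(u, 1, 1) + np.roll(u, -1, 1) +
                 np.roll(u, 1, 2) + np.roll(u, -1, 2)) / 6.0
        u = np.where(tissue, u_new, u)     # update only inside the ribbon
        u[inner] = 0.0                     # re-impose the boundary conditions
        u[outer] = 1.0
    return u
# Thickness then follows by integrating along the normalized gradient of u
# (streamline length from the inner to the outer boundary), omitted here.
```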
 
Model-based segmentation facilitates the accurate measurement of geometric properties of anatomy from ultrasound images. Regularization of the model surface is typically necessary due to the presence of noisy and incomplete boundaries. When simple regularizers are insufficient, linear basis shape models have been shown to be effective. However, for problems such as right ventricle (RV) segmentation from 3D+t echocardiography, where dense consistent landmarks and complete boundaries are absent, acquiring accurate training surfaces in dense correspondence is difficult. As a solution, this paper presents a framework that performs joint segmentation of multiple 3D+t sequences while simultaneously optimizing an underlying linear basis shape model. In particular, the RV is represented as an explicit continuous surface, and segmentation of all frames is formulated as a single continuous energy minimization problem. Shape information is automatically shared between frames, missing boundaries are implicitly handled, and only coarse surface initializations are necessary. The framework is demonstrated to successfully segment both multiple-view and multiple-subject collections of 3D+t echocardiography sequences, and the results confirm that the linear basis shape model is an effective model constraint. Furthermore, the framework is shown to achieve smaller segmentation errors than a state-of-the-art commercial semi-automatic RV segmentation package.
 
In this paper, we propose a new strategy for modelling sliding conditions when registering 3D images in a piecewise-diffeomorphic framework. More specifically, our main contribution is the development of a mathematical formalism to perform Large Deformation Diffeomorphic Metric Mapping registration with sliding conditions. We also show how to adapt this formalism to the LogDemons diffeomorphic registration framework. We finally show how to apply this strategy to estimate the respiratory motion between 3D CT pulmonary images. Quantitative tests are performed on 2D and 3D synthetic images, as well as on real 3D lung images from the MICCAI EMPIRE10 challenge. Results show that our strategy estimates accurate mappings of entire 3D thoracic image volumes that exhibit a sliding motion, as opposed to conventional registration methods which are not capable of capturing discontinuous deformations at the thoracic cage boundary. They also show that although the deformations are not smooth across the location of sliding conditions, they are almost always invertible in the whole image domain. This would be helpful for radiotherapy planning and delivery.
 
This paper presents a new registration algorithm, called Temporal Diffeomorphic Free Form Deformation (TDFFD), and its application to motion and strain quantification from a sequence of 3D ultrasound (US) images. The originality of our approach resides in enforcing time consistency by representing the 4D velocity field as the sum of continuous spatiotemporal B-spline kernels. The spatiotemporal displacement field is then recovered through forward Eulerian integration of the non-stationary velocity field. The strain tensor is computed locally using the spatial derivatives of the reconstructed displacement field. The energy functional considered in this paper weights two terms: an image similarity term and a regularization term. The image similarity metric is the sum of squared differences between the intensities of each frame and a reference one. Any frame in the sequence can be chosen as the reference. The regularization term is based on the incompressibility of myocardial tissue. TDFFD was compared to pairwise 3D FFD and 3D+t FFD, on both displacement and velocity fields, on a set of synthetic 3D US images with different noise levels. TDFFD showed increased robustness to noise compared to these two state-of-the-art algorithms. TDFFD also proved to be more resistant to a reduced temporal resolution when decimating this synthetic sequence. Finally, this synthetic dataset was used to determine optimal settings of the TDFFD algorithm. Subsequently, TDFFD was applied to a database of cardiac 3D US images of the left ventricle acquired from 9 healthy volunteers and 13 patients treated by Cardiac Resynchronization Therapy (CRT). In healthy cases, uniform strain patterns were observed over all myocardial segments, as physiologically expected. In all CRT patients, the improvement in the synchrony of regional longitudinal strain correlated with CRT clinical outcome as quantified by the reduction of end-systolic left ventricular volume at follow-up (6 and 12 months), showing the potential of the proposed algorithm for the assessment of CRT.
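Once the displacement field is reconstructed, the strain tensor follows from its spatial derivatives; a common choice is the Green-Lagrange tensor E = ½(FᵀF − I) with F = I + ∇u. Below is a minimal finite-difference sketch on a regular grid; the paper evaluates the derivatives from the B-spline representation rather than by finite differences.

```python
import numpy as np

def green_lagrange_strain(u, spacing=(1.0, 1.0, 1.0)):
    """u: displacement field of shape (3, nz, ny, nx).
    Returns the Green-Lagrange strain tensor of shape (nz, ny, nx, 3, 3)."""
    grads = [np.gradient(u[c], *spacing) for c in range(3)]   # du_c / dx_a
    F = np.zeros(u.shape[1:] + (3, 3))
    for c in range(3):
        for a in range(3):
            F[..., c, a] = grads[c][a]
    F += np.eye(3)                                            # F = I + grad(u)
    E = 0.5 * (np.swapaxes(F, -1, -2) @ F - np.eye(3))
    return E
```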
 
Freehand 3D ultrasound is particularly appropriate for the measurement of organ volumes. For small organs, which can be fully examined with a single sweep of the ultrasound probe, the results are known to be much more accurate than those using conventional 2D ultrasound. However, large or complex shaped organs are difficult to quantify in this manner because multiple sweeps are required to cover the entire organ. Typically, there are significant registration errors between the various sweeps, which generate artifacts in an interpolated voxel array, making segmentation of the organ very difficult. This paper describes how sequential freehand 3D ultrasound, which does not employ an interpolated voxel array, can be used to measure the volume of large organs. Partial organ cross-sections can be segmented in the original B-scans, and then combined, without the need for image-based registration, to give the organ volume. The inherent accuracy (not including position sensor and segmentation errors) is demonstrated in simulation to be within +/- 2%. The in vivo precision of the complete system is demonstrated (by repeated observations of a human liver) to be +/- 5%.
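Once the partial cross-sections are segmented in the original B-scans, the organ volume can be accumulated without an interpolated voxel array, in the spirit of a Cavalieri estimate: the sum of cross-sectional areas times inter-section spacing. The sketch below is a simplified version assuming planar, parallel contours; the freehand geometry handled by the authors is non-parallel.

```python
import numpy as np

def polygon_area(xy):
    """Shoelace formula for a closed 2D contour given as an (N, 2) array."""
    x, y = xy[:, 0], xy[:, 1]
    return 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))

def organ_volume(contours, slice_spacing):
    """Cavalieri estimate: sum of cross-sectional areas times slice spacing."""
    return slice_spacing * sum(polygon_area(c) for c in contours)

# toy usage: ten 1 mm^2 squares spaced 2 mm apart -> 20 mm^3
square = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], dtype=float)
print(organ_volume([square] * 10, slice_spacing=2.0))
```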
 
Creating a feature-preserving average of three-dimensional anatomical surfaces extracted from volume image data is a complex task. Unlike individual images, averages present right-left symmetry and smooth surfaces, which give insight into typical proportions. Averaging multiple biological surface images requires careful superimposition and sampling of homologous regions. Our approach to biological surface image averaging grows out of a wireframe surface tessellation approach by Cutting et al. (1993). The surface-delineating wires represent high-curvature crestlines. By adding tile boundaries in flatter areas, the 3D image surface is parametrized into anatomically labeled (homology-mapped) grids. We extend the Cutting et al. wireframe approach by encoding the entire surface as a series of B-spline space curves. The crestline averaging algorithm developed by Cutting et al. may then be used for the entire surface. Shape-preserving averaging of multiple surfaces requires careful positioning of homologous surface regions such as these B-spline space curves. We test the precision of this new procedure and its ability to appropriately position groups of surfaces in order to produce a shape-preserving average. Our result provides an average that represents the source images well and may be useful clinically as a deformable model or for animation.
 
We present a new level-set based method to segment and quantify stenosed internal carotid arteries (ICAs) in 3D contrast-enhanced computed tomography angiography (CTA). Within these datasets it is difficult, even for an experienced physician, to evaluate the degree of stenosis deterministically, because the actual vessel lumen is hardly distinguishable from calcified plaque and there is no sharp border between lumen and arterial wall. To our knowledge, no commercially available software package allows the detection of the boundary between lumen and plaque components. Therefore, in the clinical environment physicians have to perform the evaluation manually. This manual approach suffers from both intra- and inter-observer variability, which motivates the development of a semi-automatic method that achieves deterministic segmentation results of the internal carotid artery via level-set techniques. With the new method, different kinds of plaque were almost completely excluded from the segmented regions. For an objective evaluation, we also studied the method's performance on four phantom datasets for which the ground-truth degree of stenosis was known a priori. Finally, we applied the method to 10 ICAs and compared the obtained segmentations with manual measurements by three physicians.
 
The current research and development of 2D (matrix-shaped) transducer arrays to acquire 3D ultrasound data sets provides new insights into medical ultrasound applications and in particular into elastography. Until very recently, tissue strain estimation techniques commonly used in elastography were mainly 1D or 2D methods. In this paper, a 3D technique estimating biological soft tissue deformation under load from ultrasound radiofrequency volume acquisitions is introduced. This method locally computes axial strains, while considering lateral and elevational motions. Optimal deformation parameters are estimated as those maximizing a similarity criterion, defined as the normalized correlation coefficient, between an initial region and its deformed version, when the latter is compensated for according to these parameters. The performance of our algorithm was assessed with numerical data reproducing the configuration of breast cancer, as well as a physical phantom mimicking a pressure ulcer. Simulation results show that the estimated strain fields are very close to the theoretical values, perfectly discriminating between the harder lesion and the surrounding medium. Experimental strain images of the physical phantom demonstrated the different structures of the medium, even though they are not all detectable on the ultrasound scans. Finally, both simulated and experimental results demonstrate the ability of our algorithm to provide good-quality elastograms, even in the conditions of significant out-of-plane motion.
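At the heart of such block-matching elastography is a search for the deformation parameter that maximizes the normalized correlation coefficient between a pre-compression window and the motion-compensated post-compression window. Below is a simplified 1D axial sketch under that assumption (the paper's estimator is fully 3D and also accounts for lateral and elevational motion); the compression-from-the-transducer model and window size are illustrative choices.

```python
import numpy as np

def ncc(a, b):
    """Normalized correlation coefficient between two equal-length signals."""
    a = (a - a.mean()) / (a.std() + 1e-12)
    b = (b - b.mean()) / (b.std() + 1e-12)
    return float((a * b).mean())

def local_axial_strain(pre, post, start, win=128,
                       strains=np.linspace(0.0, 0.05, 51)):
    """Return the strain that best explains the local axial compression:
    pre-compression samples at depth z are assumed to map to z * (1 - strain)
    in the post-compression signal (compression applied from depth 0)."""
    ref = pre[start:start + win]
    t = np.arange(win, dtype=float)
    best_s, best_c = 0.0, -np.inf
    for s in strains:
        coords = (start + t) * (1.0 - s)      # compensated sample positions
        cand = np.interp(coords, np.arange(len(post)), post)
        c = ncc(ref, cand)
        if c > best_c:
            best_s, best_c = s, c
    return best_s, best_c
```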
 
In the last 20 years, 3D angiographic imaging has proven its usefulness in the context of various clinical applications. However, angiographic images are generally difficult to analyse due to their size and the complexity of the data that they represent, as well as the fact that useful information is easily corrupted by noise and artifacts. Therefore, there is an ongoing need for tools facilitating their visualisation and analysis, and vessel segmentation from such images remains a challenging task. This article presents new vessel segmentation and filtering techniques relying on recent advances in mathematical morphology. In particular, methodological results related to spatially variant mathematical morphology and connected filtering are stated and included in an angiographic data processing framework. These filtering and segmentation methods are evaluated on real and synthetic 3D angiographic data.
 
A multiple hypothesis tracking approach to the segmentation of small 3D vessel structures is presented. By simultaneously tracking multiple hypothetical vessel trajectories, low contrast passages can be traversed, leading to an improved tracking performance in areas of low contrast. This work also contributes a novel mathematical vessel template model, with which an accurate vessel centerline extraction is obtained. The tracking is fast enough for interactive segmentation and can be combined with other segmentation techniques to form robust hybrid methods. This is demonstrated by segmenting both the liver arteries in CT angiography data, which is known to pose great challenges, and the coronary arteries in 32 CT cardiac angiography data sets in the Rotterdam Coronary Artery Algorithm Evaluation Framework, for which ground-truth centerlines are available.
 
Top-cited authors
Ron Kikinis
  • Harvard Medical School
Mark Jenkinson
  • University of Oxford
Stephen Smith
  • The New South Wales Department of Health
Ben Glocker
  • Imperial College London
Guido Gerig
  • NYU Tandon School of Engineering