Article

A Robust Similarity Measure for Volumetric Image Registration with Outliers


Abstract

Image registration under challenging realistic conditions is a very important area of research. In this paper, we focus on algorithms that seek to densely align two volumetric images according to a global similarity measure. Despite intensive research in this area, there is still a need for similarity measures that are robust to outliers common to many different types of images. For example, medical image data is often corrupted by intensity inhomogeneities and may contain outliers in the form of pathologies. In this paper we propose a global similarity measure that is robust to both intensity inhomogeneities and outliers, without requiring prior knowledge of the type of outliers. We combine the normalised gradients of images with the cosine function and show that the resulting measure is theoretically robust against a very general class of outliers. Experimentally, we verify the robustness of our measure within two distinct algorithms. First, we embed it within a proof-of-concept extension of the Lucas-Kanade algorithm for volumetric data. Second, we embed it within a popular non-rigid alignment framework based on free-form deformations and show it to be robust against both simulated tumours and intensity inhomogeneities.
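As a rough illustration of the idea (a minimal sketch, not the authors' exact formulation; the function name, the eps constant and the use of NumPy are assumptions made here for clarity), a cosine-of-normalised-gradients similarity for two 3D volumes can be written as follows. Gradients are normalised so that only edge orientation, not intensity magnitude, contributes, which is what makes such a measure insensitive to smooth intensity inhomogeneities.

    # Illustrative sketch only: a cosine-of-normalised-gradients similarity
    # for two volumes of equal shape, using NumPy.
    import numpy as np

    def cos_ngf_similarity(fixed, moving, eps=1e-3):
        """Mean squared cosine between the normalised gradient fields of two volumes."""
        gf = np.gradient(fixed.astype(np.float64))   # 3 gradient components
        gm = np.gradient(moving.astype(np.float64))
        # Regularised gradient magnitudes; eps suppresses noise in flat regions.
        nf = np.sqrt(sum(g ** 2 for g in gf) + eps ** 2)
        nm = np.sqrt(sum(g ** 2 for g in gm) + eps ** 2)
        # Cosine of the angle between the two gradient directions at each voxel.
        cos_theta = sum(a * b for a, b in zip(gf, gm)) / (nf * nm)
        # Squaring removes the dependence on gradient sign (contrast polarity).
        return float(np.mean(cos_theta ** 2))

A registration algorithm would maximise such a score over the transformation parameters; because each voxel contributes at most 1 regardless of its intensity, voxels belonging to outlier structures cannot dominate the sum.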


... All rigid intra-subject registrations were performed using Normalised Mutual Information (NMI) [67] with 64 histogram bins as the similarity measure. The non-linear registrations of the age-specific T1 template to T1 subject sequences used a cosine similarity measure based on normalised gradient fields (cosNGF) [68], a transformation model based on B-spline free-form deformations [69], an image pyramid of 4 levels, bending energy (BE) as the regularisation term, and 5 mm final control point spacing. The energy term weight distribution was set to 0.995 cosNGF + 0.005 BE. ...
... The energy term weight distribution was set to 0.995 cosNGF + 0.005 BE. The cosine similarity measure that we use is designed to be much less sensitive than standard similarities to missing correspondences [68], such as those introduced by the presence of haematoma and oedema. ...
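Read schematically (our paraphrase of the settings quoted above, not a formula taken from the cited works), the objective optimised in that pipeline is a weighted sum of the similarity term and the regulariser,

    E(T) = 0.995 \, S_{\mathrm{cosNGF}}\!\left(I_{\mathrm{ref}},\, I_{\mathrm{mov}} \circ T\right) \; + \; 0.005 \, \mathrm{BE}(T),

where T is the B-spline free-form deformation (optimised over a 4-level pyramid down to a 5 mm control-point spacing) and BE is its bending energy.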
Article
Full-text available
Background: Spontaneous intracerebral haemorrhage (SICH) is a common condition with high morbidity and mortality. Segmentation of haematoma and perihaematoma oedema on medical images provides quantitative outcome measures for clinical trials and may provide important markers of prognosis in people with SICH. Methods: We take advantage of improved contrast seen on magnetic resonance (MR) images of patients with acute and early subacute SICH and introduce an automated algorithm for haematoma and oedema segmentation from these images. To our knowledge, there is no previously proposed segmentation technique for SICH that utilises MR images directly. The method is based on shape and intensity analysis for haematoma segmentation and voxel-wise dynamic thresholding of hyper-intensities for oedema segmentation. Results: Using Dice scores to measure segmentation overlaps between labellings yielded by the proposed algorithm and five different expert raters on 18 patients, we observe that our technique achieves overlap scores that are very similar to those obtained by pairwise expert rater comparison. A further comparison between the proposed method and a state-of-the-art Deep Learning segmentation on a separate set of 32 manually annotated subjects confirms the proposed method can achieve comparable results with very mild computational burden and in a completely training-free and unsupervised way. Conclusion: Our technique can be a computationally light and effective way to automatically delineate haematoma and oedema extent directly from MR images. Thus, with increasing use of MR images clinically after intracerebral haemorrhage this technique has the potential to inform clinical practice in the future.
... One of the most widely used statistical measures is the correlation coefficient. The choice of correlation and dissimilarity measures is essential in many areas of science including, but not limited to, clustering co-expressed genes, mediation and moderation analysis with structural equation modeling, time series analysis, pattern recognition, autonomous robots, structural engineering, image recognition, graph theoretical algorithms, spatiotemporal trajectory, artificial intelligence, machine learning techniques, classification, principal component analysis, discriminant analysis, and correlation graphs [1][2][3][4][5][6][7][8][9][10][11][12]. The need for robust techniques is of utmost significance when dealing with high dimensional biological noisy data. ...
Article
Full-text available
Background The most common measure of association between two continuous variables is the Pearson correlation (Maronna et al., Robust Statistics, 2019). When outliers are present, Pearson does not accurately measure association and robust measures are needed. This article introduces three new robust measures of correlation: Taba (T), TabWil (TW), and TabWil rank (TWR). The correlation estimators T and TW measure a linear association between two continuous or ordinal variables, whereas TWR measures a monotonic association. The robustness of these proposed measures in comparison with Pearson (P), Spearman (S), Quadrant (Q), Median (M), and Minimum Covariance Determinant (MCD) is examined through simulation. Taba distance is used to analyze genes, and statistical tests were used to identify those genes most significantly associated with Williams Syndrome (WS). Results Based on the root mean square error (RMSE) and bias, the three proposed correlation measures are highly competitive when compared to classical measures such as P and S as well as robust measures such as Q, M, and MCD. Our findings indicate TBL2 was the most significant gene among patients diagnosed with WS and had the most significant reduction in gene expression level when compared with control (P value = 6.37E-05). Conclusions Overall, when the distribution is bivariate Log-Normal or bivariate Weibull, TWR performs best in terms of bias and T performs best with respect to RMSE. Under the Normal distribution, MCD performs well with respect to bias and RMSE; but TW, TWR, T, S, and P correlations were in close proximity. The identification of TBL2 may serve as a diagnostic tool for WS patients. A Taba R package has been developed and is available for use to perform all necessary computations for the proposed methods.
... However, they can fail in the face of large intensity inconsistencies [17,18], which can be caused, e.g., by non-homogeneous transmission or reception of the MR signal. To reduce the dependency on the image intensities, registration approaches based on aligning edges have been investigated, which include gradient magnitude correlation [19], Canny filters [20] and normalised gradients dot product [21,22]. ...
Conference Paper
Full-text available
In medical imaging it is common practice to acquire a wide range of modalities (MRI, CT, PET, etc.), to highlight different structures or pathologies. As patient movement between scans or scanning session is unavoidable, registration is often an essential step before any subsequent image analysis. In this paper, we introduce a cost function based on joint total variation for such multimodal image registration. This cost function has the advantage of enabling principled, groupwise alignment of multiple images, whilst being insensitive to strong intensity non-uniformities. We evaluate our algorithm on rigidly aligning both simulated and real 3D brain scans. This validation shows robustness to strong intensity non-uniformities and low registration errors for CT/PET to MRI alignment. Our implementation is publicly available at https://github.com/brudfors/coregistration-njtv.
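A generic joint total variation term conveys the idea (this is the textbook joint-TV expression, written here for orientation; the normalised variant used in the paper may differ in detail):

    \mathrm{JTV}(I_1, \dots, I_N) \;=\; \int_{\Omega} \sqrt{\; \epsilon^2 + \sum_{n=1}^{N} \left\lVert \nabla I_n(\mathbf{x}) \right\rVert^2 \;}\, d\mathbf{x}.

Because of the square root, edges that coincide across the N images cost less than edges appearing at distinct locations, which is what drives the groupwise, intensity-insensitive alignment.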
... To reduce the dependency on the image intensities, registration approaches based on aligning edges have been investigated, which include gradient magnitude correlation (Maintz et al., 1996), Canny filters (Orchard, 2007) and normalised gradients dot product (Haber and Modersitzki, 2006; Snape et al., 2016). Matching surfaces or segmentations has also been used as a method for registering images (Pelizzari et al., 1989; Hemler et al., 1995; Greve and Fischl, 2009; Xiaohua et al., 2005; Aganj and Fischl, 2017). ...
Conference Paper
I will in this thesis present novel computational methods for processing routine clinical brain scans. Such scans were originally acquired for qualitative assessment by trained radiologists, and present a number of difficulties for computational models, such as those within common neuroimaging analysis software. The overarching objective of this work is to enable efficient and fully automated analysis of large neuroimaging datasets, of the type currently present in many hospitals worldwide. The methods presented are based on probabilistic, generative models of the observed imaging data, and therefore rely on informative priors and realistic forward models. The first part of the thesis will present a model for image quality improvement, whose key component is a novel prior for multimodal datasets. I will demonstrate its effectiveness for super-resolving thick-sliced clinical MR scans and for denoising CT images and MR-based, multi-parametric mapping acquisitions. I will then show how the same prior can be used for within-subject, intermodal image registration, for more robustly registering large numbers of clinical scans. The second part of the thesis focusses on improved, automatic segmentation and spatial normalisation of routine clinical brain scans. I propose two extensions to a widely used segmentation technique. First, a method for this model to handle missing data, which allows me to predict entirely missing modalities from one, or a few, MR contrasts. Second, a principled way of combining the strengths of probabilistic, generative models with the unprecedented discriminative capability of deep learning. By introducing a convolutional neural network as a Markov random field prior, I can model nonlinear class interactions and learn these using backpropagation. I show that this model is robust to sequence and scanner variability. Finally, I show examples of fitting a population-level, generative model to various neuroimaging data, which can model, e.g., CT scans with haemorrhagic lesions.
... Their features are extracted from an image by convolving it with a chosen filter, such as a kernel or Gabor filter, or by combining filters to generate a feature map of the input image [48]. The similarity can then be measured by applying a cosine similarity function [49]. ...
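A minimal sketch of that comparison step (a hypothetical helper written here for illustration; any CNN or Gabor feature maps could be used as input):

    import numpy as np

    def cosine_similarity(feat_a, feat_b, eps=1e-12):
        """Cosine of the angle between two flattened feature maps."""
        a = np.ravel(feat_a).astype(np.float64)
        b = np.ravel(feat_b).astype(np.float64)
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + eps))

Values close to 1 indicate near-identical feature responses; values near 0 indicate unrelated content.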
Article
Full-text available
The problem of forged images has become a global phenomenon that is spreading mainly through social media. New technologies have provided both the means and the support for this phenomenon, but they are also enabling a targeted response to overcome it. Deep convolution learning algorithms are one such solution. These have been shown to be highly effective in dealing with image forgery derived from generative adversarial networks (GANs). In this type of algorithm, the image is altered such that it appears identical to the original image and is nearly undetectable to the unaided human eye as a forgery. The present paper investigates copy-move forgery detection using a fusion processing model comprising a deep convolutional model and an adversarial model. Four datasets are used. Our results indicate high detection accuracy (approximately 95%) for the deep learning CNN and discriminator forgery detectors. Consequently, an end-to-end trainable deep neural network approach to forgery detection appears to be the optimal strategy. The network is developed based on a two-branch architecture and a fusion module. The two branches are used to localize and identify copy-move forgery regions through CNN and GAN.
... In order to eliminate outliers, several MIR techniques have been developed. Consistency tests 46, gradient-based asymmetric multifeature MI 47, intensity transformation, joint saliency map (JSM) and normalized gradients 48, and graph-based multifeature MI 48 are popular techniques for the rejection of outliers in medical images. Despite the availability of these techniques, further developments are needed to enhance the robustness of available similarity measures towards outliers. ...
Article
Full-text available
The objective of this paper is to provide a detailed overview of the classification and applications of medical image registration. Issues in medical image registration are also presented, along with promising solutions and research guidelines. In this review, the general concepts, classification, applications and issues of medical image registration are presented and analyzed in a comprehensive manner. The analysis differs from already published work in that we have performed a detailed investigation of the classification, applications and issues of medical image registration. The knowledge on the work that has been developed in the area is presented in a compact and systematic form. This work contributes to the field of medical image registration by providing a useful platform for both researchers and clinicians.
... Therefore, several approaches have been used for the rejection of outliers in medical image registration. The most prominent among them include consistency tests 32, intensity transformation 33, gradient-based asymmetric multifeature MI 34, graph-based multifeature MI 35, joint saliency map (JSM) 31 and normalized gradients 36. The rejection of outliers is a challenging task in medical image registration because a large number of outliers are present in image-guided surgery applications. Therefore, more effort is required to improve the robustness of available similarity measures towards outliers. ...
Article
Full-text available
The continuous development and innovation in medical imaging techniques provide clinicians with new ways to improve health care services. Despite improvements in health care services, several issues and challenges in medical image analysis are still present. Image registration is one of the most important tasks in medical image analysis and is the most critical step in several clinical applications. In this paper, medical image registration, which effectively integrates complementary and valuable information from multiple imaging sources and represents it in a single, more informative image, is introduced. This paper covers the most prominent state-of-the-art issues and challenges in medical image registration and suggests some possible solutions. Moreover, the factors affecting the accuracy, reliability and efficiency of registration techniques are presented. An improved health care service is difficult to achieve until all the issues and challenges in medical image registration are identified and subsequently solved.
... We also show that the proposed measure is robust to the presence of pathologies such as tumours or lesions in the images being registered. A preliminary version of this work can be found in [172]. ...
Thesis
Full-text available
The automated analysis of medical images plays an increasingly significant part in many clinical applications. Image registration is an important and widely used technique in this context. Examples of its use include, but are not limited to: longitudinal studies, atlas construction, statistical analysis of populations and automatic or semi-automatic parcellation of structures. Although image registration has been a subject of active research since the 1990s, it is a challenging topic with many issues that remain to be solved. This thesis seeks to address some of the open challenges of image registration by proposing fast and robust methods based on the widely utilised and well-established registration framework of B-spline Free-Form Deformations (FFD). In this work, a statistical method has been incorporated into the FFD model, in order to obtain a fast learning-based method that produces results that are in accordance with the underlying variability of the population under study. Several comparisons between different statistical analysis methods that can be used in this context are performed. Secondly, a method to improve the convergence of the B-Spline FFD method by learning a gradient projection using principal component analysis and linear regression is proposed. Furthermore, a robust similarity measure is proposed that enables the registration of images affected by intensity inhomogeneities and images with pathologies, e.g. lesions and/or tumours. All the methods presented in this thesis have been extensively evaluated using both synthetic data and large datasets of real clinical data, such as Magnetic Resonance (MR) images of the brain and heart.
Article
Full-text available
Image registration is a common task in medical image analysis, and a significant number of algorithms have been developed to perform rigid and non-rigid image registration. In particular, the free-form deformation algorithm is frequently used to carry out the non-rigid registration task; however, it is a very computationally intensive algorithm. In this work, we describe an approach based on profiling data to identify potential parts of this algorithm for which parallel implementations can be developed. The proposed approach assesses the efficiency of the algorithm by applying performance analysis techniques commonly available in traditional computer operating systems. Hence, this article provides guidelines to support researchers working on medical image processing and analysis in achieving real-time non-rigid image registration applications using common computing systems. According to our experimental findings, significant speedups can be accomplished by parallelizing sequential snippets, i.e., code regions that are executed more than once. For the selected costly functions previously identified in the studied free-form deformation algorithm, the developed parallelization decreased the runtime by up to seven times relative to the corresponding single-thread implementation. The implementations were developed using the Open Multi-Processing (OpenMP) application programming interface. In conclusion, this study confirms that, based on call graph visualization and the detected performance bottlenecks, one can easily find and evaluate snippets that are potential optimization targets, in addition to memory-access throughput.
Chapter
In medical imaging it is common practice to acquire a wide range of modalities (MRI, CT, PET, etc.), to highlight different structures or pathologies. As patient movement between scans or scanning session is unavoidable, registration is often an essential step before any subsequent image analysis. In this paper, we introduce a cost function based on joint total variation for such multimodal image registration. This cost function has the advantage of enabling principled, groupwise alignment of multiple images, whilst being insensitive to strong intensity non-uniformities. We evaluate our algorithm on rigidly aligning both simulated and real 3D brain scans. This validation shows robustness to strong intensity non-uniformities and low registration errors for CT/PET to MRI alignment. Our implementation is publicly available at https://github.com/brudfors/coregistration-njtv.
Article
Full-text available
Dense image alignment, when the displacement between the frames is large, can be a challenging task. This paper presents a novel dense image alignment algorithm, the Adaptive Forwards Additive Lucas-Kanade (AFA-LK) tracking algorithm, which considers the scale-space representation of the images, parametrized by a scale parameter, to estimate the geometric transformation between an input image and the corresponding template. The main result in this framework is the optimization of the scale parameter along with the transformation parameters, which permits a significant increase in the convergence domain of the proposed algorithm while maintaining high estimation precision. The performance of the proposed method was tested in various computer-based experiments, which demonstrate its advantages over both geometric and learning-based methods from the literature, in terms of precision as well as convergence rate.
Thesis
Full-text available
The automated analysis of medical images plays an increasingly significant part in many clinical applications. Image registration is an important and widely used technique in this context. Examples of its use include, but are not limited to: longitudinal studies, atlas construction, statistical analysis of populations and automatic or semi-automatic parcellation of structures. Although image registration has been a subject of active research since the 1990s, it is a challenging topic with many issues that remain to be solved. This thesis seeks to address some of the open challenges of image registration by proposing fast and robust methods based on the widely utilised and well-established registration framework of B-spline Free-Form Deformations (FFD). In this work, a statistical method has been incorporated into the FFD model, in order to obtain a fast learning-based method that produces results that are in accordance with the underlying variability of the population under study. Several comparisons between different statistical analysis methods that can be used in this context are performed. Secondly, a method to improve the convergence of the B-Spline FFD method by learning a gradient projection using principal component analysis and linear regression is proposed. Furthermore, a robust similarity measure is proposed that enables the registration of images affected by intensity inhomogeneities and images with pathologies, e.g. lesions and/or tumours. All the methods presented in this thesis have been extensively evaluated using both synthetic data and large datasets of real clinical data, such as Magnetic Resonance (MR) images of the brain and heart.
Conference Paper
Full-text available
Low-rank image decomposition has the potential to address a broad range of challenges that routinely occur in clinical practice. Its novelty and utility in the context of atlas-based analysis stems from its ability to handle images containing large pathologies and large deformations. Potential applications include atlas-based tissue segmentation and unbiased atlas building from data containing pathologies. In this paper we present atlas-based tissue segmentation of MRI from patients with large pathologies. Specifically, a healthy brain atlas is registered with the low-rank components from the input MRIs, the low-rank components are then re-computed based on those registrations, and the process is then iteratively repeated. Preliminary evaluations are conducted using the brain tumor segmentation challenge data (BRATS ’12).
Conference Paper
Full-text available
We present a super fast variational algorithm for the challenging problem of multimodal image registration. It is capable of registering full-body CT and PET images in about a second on a standard CPU with virtually no memory requirements. The algorithm is founded on a Gauss-Newton optimization scheme with specifically tailored, mathematically optimized computations for the objective function and derivatives. It is fully parallelized and perfectly scalable, thus directly suitable for usage in many-core environments. The accuracy of our method was tested on 21 PET-CT scan pairs from clinical routine. The method was able to correct random distortions in the range from -10 cm to 10 cm translation and from -15° to 15° rotation to subvoxel accuracy. In addition, it exhibits excellent robustness to noise.
Article
Full-text available
In this paper, we address a complex image registration issue arising when the dependencies between intensities of images to be registered are not spatially homogeneous. Such a situation is frequently encountered in medical imaging when a pathology present in one of the images locally modifies the intensity dependencies observed on normal tissues. Usual image registration models, which are based on a single global intensity similarity criterion, fail to register such images, as they are blind to local deviations of intensity dependencies. Such a limitation is also encountered in contrast-enhanced images where there exist multiple pixel classes having different properties of contrast agent absorption. In this paper, we propose a new model in which the similarity criterion is adapted locally to images by classification of image intensity dependencies. Defined in a Bayesian framework, the similarity criterion is a mixture of probability distributions describing dependencies on two classes. The model also includes a class map which locates pixels of the two classes and weighs the two mixture components. The registration problem is formulated both as an energy minimization problem and as a maximum a posteriori estimation problem. It is solved using a gradient descent algorithm. In the problem formulation and resolution, the image deformation and the class map are estimated simultaneously, leading to an original combination of registration and classification that we call image classifying registration. Whenever sufficient information about class location is available in applications, the registration can also be performed on its own by fixing a given class map. Finally, we illustrate the interest of our model on two real applications from medical imaging: template-based segmentation of contrast-enhanced images and lesion detection in mammograms. We also conduct an evaluation of our model on simulated medical data and show its ability to take into account spatial variations of intensity dependencies while keeping a good registration accuracy.
Conference Paper
Full-text available
We focus on the image registration problem. Mathematically, this problem consists of minimizing an energy which is composed of a regularization term and a similarity term. The similarity term, which depends on image intensities, has to be chosen according to the nature of image grey-level dependencies. Its adequacy always depends on the validity of some assumptions about these dependencies. But, in medical applications, there are many situations where these assumptions are not confirmed. In particular, intensity variations caused by observed pathologies may not be consistent with assumptions. Such variations may distort the registration constraints and cause registration errors. In order to cope with this problem, we propose a new approach which takes into account the possible inconsistencies in the computation of the registration constraints. This approach is described in two different points of view. First, we formulate a new minimization problem with an extra unknown which measures the degree of inconsistency on each pixel. Then, we show that this problem is equivalent to another one which can be related to the usual ones. We also outline several ways to generalize our approach and propose an algorithm to numerically solve these problems. Finally, we illustrate on synthetic data some characteristics of the algorithm when dealing with inconsistent image differences.
Conference Paper
Full-text available
Parametric models of shape and texture such as Active Appearance Models (AAMs) are diverse tools for deformable object appearance modeling and have found important applications in both image synthesis and analysis problems. Among the numerous algorithms that have been proposed for AAM fitting, those based on the inverse-compositional image alignment technique have recently received considerable attention due to their potential for high efficiency. However, existing fitting algorithms perform poorly when used in conjunction with models exhibiting significant appearance variation, such as AAMs trained on multiple-subject human face images. We introduce two enhancements to inverse-compositional AAM matching algorithms in order to overcome this limitation. First, we propose fitting algorithm adaptation, by means of (a) fitting matrix adjustment and (b) AAM mean template update. Second, we show how prior information can be incorporated and constrain the AAM fitting process. The inverse-compositional nature of the algorithm allows efficient implementation of these enhancements. Both techniques substantially improve AAM fitting performance, as demonstrated with experiments on publicly available multi-person face datasets.
Conference Paper
Full-text available
We propose a correlation-based approach to parametric object alignment particularly suitable for face analysis applications which require efficiency and robustness against occlusions and illumination changes. Our algorithm registers two images by iteratively maximizing their correlation coefficient using gradient ascent. We compute this correlation coefficient from complex gradients which capture the orientation of image structures rather than pixel intensities. The maximization of this gradient correlation coefficient results in an algorithm which is as computationally efficient as ℓ2 norm-based algorithms, can be extended within the inverse compositional framework (without the need for Hessian re-computation) and is robust to outliers. To the best of our knowledge, no other algorithm has been proposed so far having all three features. We show the robustness of our algorithm for the problem of face alignment in the presence of occlusions and non-uniform illumination changes. The code that reproduces the results of our paper can be found at http://ibug.doc.ic.ac.uk/resources.
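In outline (a schematic rendering of the idea rather than the published expression), the image gradients are treated as complex numbers and only their orientations are correlated:

    g_k(\mathbf{x}) = \partial_x I_k(\mathbf{x}) + i\,\partial_y I_k(\mathbf{x}), \qquad
    \rho \;\propto\; \sum_{\mathbf{x}} \cos\!\big( \angle g_1(\mathbf{x}) - \angle g_2(\mathbf{x}) \big).

Occluded or corrupted pixels have essentially random orientation differences, so their cosine terms average towards zero instead of pulling the fit, which is the source of the robustness claimed above.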
Conference Paper
Full-text available
Image registration finds a variety of applications in computer vision. Unfortunately, traditional image registration techniques tend to be costly. We present a new image registration technique that makes use of the spatial intensity gradient of the images to find a good match using a type of Newton-Raphson iteration. Our technique is faster because it examines far fewer potential matches between the images than existing techniques. Furthermore, this registration technique can be generalized to handle rotation, scaling and shearing. We show how our technique can be adapted for use in a stereo vision system.
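The Newton-Raphson style update referred to above is, in its standard forwards-additive form for a general warp W(x; p) (textbook notation, not a quotation from the paper):

    \Delta \mathbf{p} \;=\; H^{-1} \sum_{\mathbf{x}} \left[ \nabla I \, \frac{\partial W}{\partial \mathbf{p}} \right]^{\!\top} \big[ T(\mathbf{x}) - I(W(\mathbf{x};\mathbf{p})) \big],
    \qquad
    H \;=\; \sum_{\mathbf{x}} \left[ \nabla I \, \frac{\partial W}{\partial \mathbf{p}} \right]^{\!\top} \left[ \nabla I \, \frac{\partial W}{\partial \mathbf{p}} \right],

with p updated as p <- p + Δp until convergence; the spatial intensity gradient ∇I is exactly what removes the need to test large numbers of candidate matches.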
Article
Full-text available
Mutual information (MI) registration including spatial information has been shown to perform better than the traditional MI measures for certain nonrigid registration tasks. In this work, we first provide new insight into problems of MI-based registration and propose to use the spatially encoded mutual information (SEMI) to tackle these problems. To encode spatial information, we propose a hierarchical weighting scheme to differentiate the contribution of sample points to a set of entropy measures, which are associated to spatial variable values. By using free-form deformations (FFDs) as the transformation model, we can first define the spatial variable using the set of FFD control points, and then propose a local ascent optimization scheme for nonrigid SEMI registration. The proposed SEMI registration can improve the registration accuracy in the nonrigid cases where the traditional MI is challenged due to intensity distortion, contrast enhancement, or different imaging modalities. It also has a similar computation complexity to the registration using traditional MI measures, improving computation time by up to two orders of magnitude compared to the traditional schemes. We validate our algorithms using phantom brain MRI, simulated dynamic contrast enhanced magnetic resonance imaging (MRI) of the liver, and in vivo cardiac MRI. The results show that the SEMI registration significantly outperforms the traditional MI registration.
Article
Full-text available
An important issue in computer-assisted surgery of the liver is a fast and reliable transfer of preoperative resection plans to the intraoperative situation. One problem is to match the planning data, derived from preoperative CT or MR images, with 3D ultrasound images of the liver, acquired during surgery. As the liver deforms significantly in the intraoperative situation non-rigid registration is necessary. This is a particularly challenging task because pre- and intraoperative image data stem from different modalities and ultrasound images are generally very noisy. One way to overcome these problems is to incorporate prior knowledge into the registration process. We propose a method of combining anatomical landmark information with a fast non-parametric intensity registration approach. Mathematically, this leads to a constrained optimization problem. As distance measure we use the normalized gradient field which allows for multimodal image registration. A qualitative and quantitative validation on clinical liver data sets of three different patients has been performed. We used the distance of dense corresponding points on vessel center lines for quantitative validation. The combined landmark and intensity approach improves the mean and percentage of point distances above 3 mm compared to rigid and thin-plate spline registration based only on landmarks. The proposed algorithm offers the possibility to incorporate additional a priori knowledge-in terms of few landmarks-provided by a human expert into a non-rigid registration process.
Article
Full-text available
In this work we propose the use of a modified version of the correlation coefficient as a performance criterion for the image alignment problem. The proposed modification has the desirable characteristic of being invariant with respect to photometric distortions. Since the resulting similarity measure is a nonlinear function of the warp parameters, we develop two iterative schemes for its maximization, one based on the forward additive approach and the second on the inverse compositional method. As it is customary in iterative optimization, in each iteration, the nonlinear objective function is approximated by an alternative expression for which the corresponding optimization is simple. In our case we propose an efficient approximation that leads to a closed-form solution (per iteration) which is of low computational complexity, the latter property being particularly strong in our inverse version. The proposed schemes are tested against the Forward Additive Lucas-Kanade and the Simultaneous Inverse Compositional (SIC) algorithm through simulations. Under noisy conditions and photometric distortions, our forward version achieves more accurate alignments and exhibits faster convergence whereas our inverse version has similar performance as the SIC algorithm but at a lower computational complexity.
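The photometric-invariant correlation in question is, in essence, a normalised inner product of zero-mean image vectors (sketched here in generic notation):

    \rho(\mathbf{p}) \;=\; \frac{ \bar{\mathbf{i}}_r^{\top} \, \bar{\mathbf{i}}_w(\mathbf{p}) }{ \lVert \bar{\mathbf{i}}_r \rVert \, \lVert \bar{\mathbf{i}}_w(\mathbf{p}) \rVert },

where \bar{\mathbf{i}}_r and \bar{\mathbf{i}}_w(\mathbf{p}) are the vectorised reference and warped images with their means subtracted; mean subtraction and normalisation make the criterion invariant to global brightness (bias) and contrast (gain) changes, which is the photometric invariance referred to above.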
Article
Full-text available
A novel approach to correcting for intensity nonuniformity in magnetic resonance (MR) data is described that achieves high performance without requiring a model of the tissue classes present. The method has the advantage that it can be applied at an early stage in an automated data analysis, before a tissue model is available. Described as nonparametric nonuniform intensity normalization (N3), the method is independent of pulse sequence and insensitive to pathological data that might otherwise violate model assumptions. To eliminate the dependence of the field estimate on anatomy, an iterative approach is employed to estimate both the multiplicative bias field and the distribution of the true tissue intensities. The performance of this method is evaluated using both real and simulated MR data.
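The image-formation model underlying this family of corrections is, schematically,

    v(\mathbf{x}) \;=\; u(\mathbf{x})\, f(\mathbf{x}) + n(\mathbf{x})
    \quad\Longrightarrow\quad
    \log v(\mathbf{x}) \;\approx\; \log u(\mathbf{x}) + \log f(\mathbf{x}) \;\; (n \approx 0),

where v is the observed scan, u the true tissue intensities, f the smooth multiplicative bias field and n noise; N3 alternates between estimating a smooth log-domain field and sharpening the distribution of the recovered u, which is why no explicit tissue model is needed.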
Article
Full-text available
We present a gradient-based method for rigid registration of a patient preoperative computed tomography (CT) to its intraoperative situation with a few fluoroscopic X-ray images obtained with a tracked C-arm. The method is noninvasive, anatomy-based, requires simple user interaction, and includes validation. It is generic and easily customizable for a variety of routine clinical uses in orthopaedic surgery. Gradient-based registration consists of three steps: 1) initial pose estimation; 2) coarse geometry-based registration on bone contours, and; 3) fine gradient projection registration (GPR) on edge pixels. It optimizes speed, accuracy, and robustness. Its novelty resides in using volume gradients to eliminate outliers and foreign objects in the fluoroscopic X-ray images, in speeding up computation, and in achieving higher accuracy. It overcomes the drawbacks of intensity-based methods, which are slow and have a limited convergence range, and of geometry-based methods, which depend on the image segmentation quality. Our simulated, in vitro, and cadaver experiments on a human pelvis CT, dry vertebra, dry femur, fresh lamb hip, and human pelvis under realistic conditions show a mean 0.5-1.7 mm (0.5-2.6 mm maximum) target registration accuracy.
Article
Full-text available
We propose conditional mutual information (cMI) as a new similarity measure for nonrigid image registration. We start from a 3D joint histogram incorporating, besides the reference and floating intensity dimensions, also a spatial dimension expressing the location of the joint intensity pair in the reference image. cMI is calculated as the expectation value of the conditional mutual information between the reference and floating intensities given the spatial distribution. Validation experiments were performed comparing cMI and global MI on artificial CT/MR registrations and registrations complicated with a strong bias field; both a Parzen window and generalised partial volume kernel were used for histogram construction. In both experiments, cMI significantly outperforms global MI. Moreover, cMI is compared to global MI for the registration of three patient CT/MR datasets, using overlap and centroid distance as validation measure. The best results are obtained using cMI.
Article
Full-text available
The Open Access Series of Imaging Studies is a series of magnetic resonance imaging data sets that is publicly available for study and analysis. The initial data set consists of a cross-sectional collection of 416 subjects aged 18 to 96 years. One hundred of the included subjects older than 60 years have been clinically diagnosed with very mild to moderate Alzheimer's disease. The subjects are all right-handed and include both men and women. For each subject, three or four individual T1-weighted magnetic resonance imaging scans obtained in single imaging sessions are included. Multiple within-session acquisitions provide extremely high contrast-to-noise ratio, making the data amenable to a wide range of analytic approaches including automated computational analysis. Additionally, a reliability data set is included containing 20 subjects without dementia imaged on a subsequent visit within 90 days of their initial session. Automated calculation of whole-brain volume and estimated total intracranial volume are presented to demonstrate use of the data for measuring differences associated with normal aging and Alzheimer's disease.
Article
Full-text available
Mutual Information (MI) is popular for registration via function optimisation. This work proposes an inverse compositional formulation of MI for Levenberg-Marquardt optimisation. This yields a constant Hessian, which may be pre-computed. Speed improvements of 15% were obtained, with convergence accuracies similar to those of the standard formulation.
Article
Full-text available
This paper is motivated by the analysis of serial structural magnetic resonance imaging (MRI) data of the brain to map patterns of local tissue volume loss or gain over time, using registration-based deformation tensor morphometry. Specifically, we address the important confound of local tissue contrast changes which can be induced by neurodegenerative or neurodevelopmental processes. These not only modify apparent tissue volume, but also modify tissue integrity and its resulting MRI contrast parameters. In order to address this confound we derive an approach to the voxel-wise optimization of regional mutual information (RMI) and use this to drive a viscous fluid deformation model between images in a symmetric registration process. A quantitative evaluation of the method when compared to earlier approaches is included using both synthetic data and clinical imaging data. Results show a significant reduction in errors when tissue contrast changes locally between acquisitions. Finally, examples of applying the technique to map different patterns of atrophy rate in different neurodegenerative conditions is included.
Article
Full-text available
In this paper the authors present a new approach for the nonrigid registration of contrast-enhanced breast MRI. A hierarchical transformation model of the motion of the breast has been developed. The global motion of the breast is modeled by an affine transformation while the local breast motion is described by a free-form deformation (FFD) based on B-splines. Normalized mutual information is used as a voxel-based similarity measure which is insensitive to intensity changes as a result of the contrast enhancement. Registration is achieved by minimizing a cost function, which represents a combination of the cost associated with the smoothness of the transformation and the cost associated with the image similarity. The algorithm has been applied to the fully automated registration of three-dimensional (3-D) breast MRI in volunteers and patients. In particular, the authors have compared the results of the proposed nonrigid registration algorithm to those obtained using rigid and affine registration techniques. The results clearly indicate that the nonrigid registration algorithm is much better able to recover the motion and deformation of the breast than rigid or affine registration algorithms.
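The local component of the hierarchical transformation described above is the standard cubic B-spline FFD (standard notation, reproduced here for reference):

    \mathbf{T}_{\mathrm{local}}(x,y,z) \;=\; \sum_{l=0}^{3}\sum_{m=0}^{3}\sum_{n=0}^{3} B_l(u)\, B_m(v)\, B_n(w)\; \boldsymbol{\phi}_{i+l,\, j+m,\, k+n},

where φ is the lattice of control-point displacements, B_0 to B_3 are the cubic B-spline basis functions and (u, v, w) are the fractional positions of (x, y, z) within its control-point cell; the registration then minimises a weighted sum of the negated normalised mutual information and a smoothness penalty on the transformation.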
Article
Full-text available
As an object moves through the field of view of a camera, the images of the object may change dramatically. This is not simply due to the translation of the object across the image plane; complications arise due to the fact that the object undergoes changes in pose relative to the viewing camera, in illumination relative to light sources, and may even become partially or fully occluded. We develop an efficient general framework for object tracking, which addresses each of these complications. We first develop a computationally efficient method for handling the geometric distortions produced by changes in pose. We then combine geometry and illumination into an algorithm that tracks large image regions using no more computation than would be required to track with no accommodation for illumination changes. Finally, we augment these methods with techniques from robust statistics and treat occluded regions on the object as statistical outliers. Experimental results are given to demonstrate the effectiveness of our methods
Article
Full-text available
Active Appearance Models (AAMs) and the closely related concepts of Morphable Models and Active Blobs are generative models of a certain visual phenomenon. Although linear in both shape and appearance, overall, AAMs are nonlinear parametric models in terms of the pixel intensities. Fitting an AAM to an image consists of minimising the error between the input image and the closest model instance; i.e. solving a nonlinear optimisation problem. We propose an efficient fitting algorithm for AAMs based on the inverse compositional image alignment algorithm. We show that the effects of appearance variation during fitting can be precomputed ("projected out") using this algorithm and how it can be extended to include a global shape normalising warp, typically a 2D similarity transformation. We evaluate our algorithm to determine which of its novel aspects improve AAM fitting performance.
Article
Full-text available
There are two major formulations of image alignment using gradient descent. The first estimates an additive increment to the parameters (the additive approach), the second an incremental warp (the compositional approach). We first prove that these two formulations are equivalent. A very efficient algorithm was recently proposed by Hager and Belhumeur using the additive approach that unfortunately can only be applied to a very restricted class of warps. We show that using the compositional approach an equally efficient algorithm (the inverse compositional algorithm) can be derived that can be applied to any set of warps which form a group. While most warps used in computer vision form groups, there are certain warps that do not. Perhaps most notable is the set of piecewise affine warps used in Flexible Appearance Models (FAMs). We end this paper by extending the inverse compositional algorithm to apply to FAMs.
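The distinction between the two formulations can be written compactly (standard notation):

    \text{additive:}\quad \mathbf{p} \leftarrow \mathbf{p} + \Delta\mathbf{p},
    \qquad
    \text{inverse compositional:}\quad W(\mathbf{x};\mathbf{p}) \leftarrow W(\mathbf{x};\mathbf{p}) \circ W(\mathbf{x};\Delta\mathbf{p})^{-1}.

In the inverse compositional case the incremental warp is computed on the template, so the Jacobian and Hessian are constant and can be precomputed, which is the source of its efficiency; it is also why the warps must form a group, so that composition and inversion are always defined.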
Article
Full-text available
This paper describes a new approach for tracking rigid and articulated objects using a view-based representation. The approach builds on and extends work on eigenspace representations, robust estimation techniques, and parameterized optical flow estimation. First, we note that the least-squares image reconstruction of standard eigenspace techniques has a number of problems and we reformulate the reconstruction problem as one of robust estimation. Second, we define a "subspace constancy assumption" that allows us to exploit techniques for parameterized optical flow estimation to simultaneously solve for the view of an object and the affine transformation between the eigenspace and the image. To account for large affine transformations between the eigenspace and the image we define an EigenPyramid representation and a coarse-to-fine matching strategy. Finally, we use these techniques to track objects over long image sequences in which the objects simultaneously undergo both affine image motions and changes of view. In particular we use this "EigenTracking" technique to track and recognize the gestures of a moving hand.
Article
Since the Lucas-Kanade algorithm was proposed in 1981 image alignment has become one of the most widely used techniques in computer vision. Applications range from optical flow and tracking to layered motion, mosaic construction, and face coding. Numerous algorithms have been proposed and a wide variety of extensions have been made to the original formulation. We present an overview of image alignment, describing most of the algorithms and their extensions in a consistent framework. We concentrate on the inverse compositional algorithm, an efficient algorithm that we recently proposed. We examine which of the extensions to Lucas-Kanade can be used with the inverse compositional algorithm without any significant loss of efficiency, and which cannot. In this paper, Part 1 in a series of papers, we cover the quantity approximated, the warp update rule, and the gradient descent approximation. In future papers, we will cover the choice of the error function, how to allow linear appearance variation, and how to impose priors on the parameters.
Conference Paper
A particular problem in image registration arises for multimodal images taken from different imaging devices and/or modalities. Starting in 1995, mutual information has been shown to be a very successful distance measure for multi-modal image registration. However, mutual information also has a number of well-known drawbacks. Its main disadvantage is that it is known to be highly non-convex and typically has many local maxima. This observation motivates us to seek a different image similarity measure which is better suited for optimization but equally capable of handling multi-modal images. In this work we investigate an alternative distance measure which is based on normalized gradients and compare its performance to Mutual Information. We call the new distance measure Normalized Gradient Fields (NGF).
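The NGF distance investigated there is commonly stated (up to constants) as

    \mathbf{n}_{\varepsilon}(I,\mathbf{x}) \;=\; \frac{\nabla I(\mathbf{x})}{\sqrt{\lVert \nabla I(\mathbf{x}) \rVert^{2} + \varepsilon^{2}}},
    \qquad
    D_{\mathrm{NGF}}(R,T) \;=\; \int_{\Omega} 1 - \big\langle \mathbf{n}_{\varepsilon}(R,\mathbf{x}),\, \mathbf{n}_{\varepsilon}(T,\mathbf{x}) \big\rangle^{2} \, d\mathbf{x},

where ε determines which gradients are treated as structure rather than noise; the distance is small when edges align, regardless of their contrast or polarity, and its pointwise, differentiable form is what makes it better suited to smooth optimisation than mutual information.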
Conference Paper
We propose a correlation-based approach to parametric object alignment particularly suitable for face analysis applications which require efficiency and robustness against occlusions and illumination changes. Our algorithm registers two images by iteratively maximizing their correlation coefficient using gradient ascent. We compute this correlation coefficient from complex gradients which capture the orientation of image structures rather than pixel intensities. The maximization of this gradient correlation coefficient results in an algorithm which is as computationally efficient as ℓ2 norm-based algorithms, can be extended within the inverse compositional framework (without the need for Hessian recomputation) and is robust to outliers. To the best of our knowledge, no other algorithm has been proposed so far having all three features. We show the robustness of our algorithm for the problem of face alignment in the presence of occlusions and non-uniform illumination changes. The code that reproduces the results of our paper can be found at http://ibug.doc.ic.ac.uk/resources.
Article
A novel approach to correcting for intensity nonuniformity in magnetic resonance (MR) data is described that achieves high performance without requiring a model of the tissue classes present. The method has the advantage that it can be applied at an early stage in an automated data analysis, before a tissue model is available. Described as nonparametric nonuniform intensity normalization (N3), the method is independent of pulse sequence and insensitive to pathological data that might otherwise violate model assumptions. To eliminate the dependence of the field estimate on anatomy, an iterative approach is employed to estimate both the multiplicative bias field and the distribution of the true tissue intensities. The performance of this method is evaluated using both real and simulated MR data.
Article
This paper is concerned with the development of entropy-based registration criteria for automated 3D multi-modality medical image alignment. In this application where misalignment can be large with respect to the imaged field of view, invariance to overlap statistics is an important consideration. Current entropy measures are reviewed and a normalised measure is proposed which is simply the ratio of the sum of the marginal entropies and the joint entropy. The effect of changing overlap on current entropy measures and this normalised measure are compared using a simple image model and experiments on clinical image data. Results indicate that the normalised entropy measure provides significantly improved behaviour over a range of imaged fields of view.
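The normalised measure proposed in that work is simply

    Y(A,B) \;=\; \frac{H(A) + H(B)}{H(A,B)},

the ratio of the summed marginal entropies to the joint entropy; unlike plain mutual information I(A;B) = H(A) + H(B) - H(A,B), this ratio is far less sensitive to how much of the two imaged fields of view overlaps at the current alignment.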
Article
In this work we present a novel approach for elastic image registration of multi-phase contrast-enhanced CT images of the liver. A problem in registration of multiphase CT is that the images contain similar but complementary structures. In our application each image shows a different part of the vessel system, e.g., portal/hepatic venous/arterial, or biliary vessels. Portal, arterial and biliary vessels run in parallel and abut on each other, forming the so-called portal triad, while hepatic veins run independently. Naive registration will tend to align complementary vessels. Our new approach is based on minimizing a cost function consisting of a distance measure and a regularizer. For the distance we use the recently proposed normalized gradient field measure that focuses on the alignment of edges. For the regularizer we use the linear elastic potential. The key feature of our approach is an additional penalty term using segmentations of the different vessel systems in the images to avoid overlaps of complementary structures. We successfully demonstrate our new method on real data examples.
Article
Image alignment is one of the most widely used techniques in computer vision. Applications range from optical flow, tracking and layered motion, to mosaic construction, medical image registration, and face model fitting. The original image alignment algorithm was the Lucas-Kanade algorithm. Since then, numerous extensions have been made to it. In particular, Baker and Matthews recently proposed the inverse compositional algorithm, an efficient algorithm applicable to most 2D image alignment problems. In this report, we investigate whether the 2D inverse compositional algorithm can be generalized to 2.5D and 3D. By 3D we mean volumetric data consisting of a dense 3D array of voxels. By 2.5D we mean a surface in 3D represented by a collection of 3D surface points. We show that the inverse compositional algorithm is easily generalized to 3D. On the other hand, while algebraically it appears as though the 2.5D case may be treated similarly, doing so violates one of the assumptions in the proof of equivalence of the two algorithms.
Article
In this paper, we propose a framework for both gradient descent image and object alignment in the Fourier domain. Our method centers upon the classical Lucas & Kanade (LK) algorithm where we represent the source and template/model in the complex 2D Fourier domain rather than in the spatial 2D domain. We refer to our approach as the Fourier LK (FLK) algorithm. The FLK formulation is advantageous when one preprocesses the source image and template/model with a bank of filters (e.g., oriented edges, Gabor, etc.) as 1) it can handle substantial illumination variations, 2) the inefficient preprocessing filter bank step can be subsumed within the FLK algorithm as a sparse diagonal weighting matrix, 3) unlike traditional LK, the computational cost is invariant to the number of filters and as a result is far more efficient, and 4) this approach can be extended to the Inverse Compositional (IC) form of the LK algorithm where nearly all steps (including Fourier transform and filter bank preprocessing) can be precomputed, leading to an extremely efficient and robust approach to gradient descent image matching. Further, these computational savings translate to nonrigid object alignment tasks that are considered extensions of the LK algorithm, such as those found in Active Appearance Models (AAMs).
In this paper we propose a framework for gradient descent image alignment in the Fourier domain. Specifically, we propose an extension to the classical Lucas & Kanade (LK) algorithm where we represent the source and template image's intensity pixels in the complex 2D Fourier domain rather than in the 2D spatial domain. We refer to this approach as the Fourier LK (FLK) algorithm. The FLK formulation is especially advantageous, over traditional LK, when it comes to pre-processing the source and template images with a bank of filters (e.g., Gabor filters) as: (i) it can handle substantial illumination variations, (ii) the inefficient pre-processing filter bank step can be subsumed within the FLK algorithm as a sparse diagonal weighting matrix, (iii) unlike traditional LK the computational cost is invariant to the number of filters and as a result far more efficient, (iv) this approach can be extended to the inverse compositional form of the LK algorithm where nearly all steps (including Fourier transform and filter bank pre-processing) can be pre-computed leading to an extremely efficient and robust approach to gradient descent image matching. We demonstrate robust image matching performance on a variety of objects in the presence of substantial illumination differences with exactly the same computational overhead as that of traditional inverse compositional LK during fitting.
Article
A variant of the popular nonparametric nonuniform intensity normalization (N3) algorithm is proposed for bias field correction. Given the superb performance of N3 and its public availability, it has been the subject of several evaluation studies. These studies have demonstrated the importance of certain parameters associated with the B-spline least-squares fitting. We propose the substitution of a recently developed fast and robust B-spline approximation routine and a modified hierarchical optimization scheme for improved bias field correction over the original N3 algorithm. Similar to the N3 algorithm, we also make the source code, testing, and technical documentation of our contribution, which we denote as "N4ITK," available to the public through the Insight Toolkit of the National Institutes of Health. Performance assessment is demonstrated using simulated data from the publicly available BrainWeb database, hyperpolarized ³He lung image data, and 9.4T postmortem hippocampus data.
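Because N4 ships with the Insight Toolkit, it can be invoked from SimpleITK with a few lines; a typical call might look like the following (file names are placeholders, and the iteration schedule is only an example):

# Typical N4 bias-field correction via SimpleITK (file names are placeholders).
import SimpleITK as sitk

image = sitk.ReadImage("t1_volume.nii.gz", sitk.sitkFloat32)
mask = sitk.OtsuThreshold(image, 0, 1, 200)                # crude foreground mask

corrector = sitk.N4BiasFieldCorrectionImageFilter()
corrector.SetMaximumNumberOfIterations([50, 50, 50, 50])   # four fitting levels
corrected = corrector.Execute(image, mask)

sitk.WriteImage(corrected, "t1_volume_n4.nii.gz")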
Conference Paper
This paper addresses two important issues in image registration. First, we recall the very general definition of mutual information, which allows the choice of various feature spaces in which to perform image registration. Second, we discuss the problem of finding the global maximum in an arbitrary feature space. We use a very general parallel, distributed-memory genetic optimization, which turned out to be very robust. We restrict the examples to the context of multi-modal medical image registration, but we point out that the approach is very general and therefore applicable to a wide range of other applications. The registration algorithm was analysed on a Linux cluster.
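The objective being maximised by the genetic search is the standard histogram-based estimate of mutual information; a generic sketch of that estimate (the choice of feature space and of the global optimiser is free, which is the paper's point):

# Histogram-based estimate of mutual information between two images.
import numpy as np

def mutual_information(a, b, bins=64):
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pab = joint / joint.sum()
    pa = pab.sum(axis=1, keepdims=True)    # marginal of image a
    pb = pab.sum(axis=0, keepdims=True)    # marginal of image b
    nz = pab > 0                           # ignore empty histogram cells
    return float(np.sum(pab[nz] * np.log(pab[nz] / (pa @ pb)[nz])))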
Article
A variational method for nonrigid registration of multimodal image data is presented. A suitable deformation will be determined via the minimization of a morphological, i.e., contrast-invariant, matching functional along with an appropriate regularization energy. The aim is to correlate the morphologies of a template and a reference image under the deformation. Mathematically, the morphology of images can be described by the entity of level sets of the image and hence by its Gauss map. A class of morphological matching functionals is presented which measure the defect of the template Gauss map in the deformed state with respect to the deformed Gauss map of the reference image. The problem is regularized by considering a nonlinear elastic regularization energy. Existence of a homeomorphic, minimizing deformation is proved under assumptions on the class of admissible deformations. With respect to actual medical applications, suitable generalizations of the matching energies and the boundary conditions are presented. Concerning the robust implementation of the approach, the problem is embedded in a multiscale context. A discretization based on multilinear finite elements is discussed, and first numerical results are presented.
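Contrast invariance comes from comparing directions of gradients rather than intensities. A compact sketch of one distance in this family of normalised-gradient-field functionals (a generic illustration, not a reimplementation of the cited functional):

# Normalised-gradient-field style distance between two volumes: compares
# gradient directions only, hence is invariant to monotone contrast changes.
import numpy as np

def ngf_distance(a, b, eps=1e-3):
    ga = np.stack(np.gradient(a), axis=-1)
    gb = np.stack(np.gradient(b), axis=-1)
    na = ga / np.sqrt(np.sum(ga**2, axis=-1, keepdims=True) + eps**2)
    nb = gb / np.sqrt(np.sum(gb**2, axis=-1, keepdims=True) + eps**2)
    # 1 - <na, nb>^2 per voxel: zero where gradient directions agree up to sign.
    return float(np.mean(1.0 - np.sum(na * nb, axis=-1) ** 2))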
Article
The quantitative validation of reconstruction algorithms requires reliable data. Rasterized simulations are popular but they are tainted by an aliasing component that impacts the assessment of the performance of reconstruction. We introduce analytical simulation tools that are suited to parallel magnetic resonance imaging and allow one to build realistic phantoms. The proposed phantoms are composed of ellipses and regions with piecewise-polynomial boundaries, including spline contours, Bézier contours, and polygons. In addition, they take the channel sensitivity into account, for which we investigate two possible models. Our analytical formulations provide well-defined data in both the spatial and k-space domains. Our main contribution is the closed-form determination of the Fourier transforms that are involved. Experiments validate the proposed implementation. In a typical parallel magnetic resonance imaging reconstruction experiment, we quantify the bias in the overly optimistic results obtained with rasterized simulations (the "inverse-crime" situation). We provide a package that implements the different simulations and provide tools to guide the design of realistic phantoms.
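The simplest closed-form building block is the origin-centred ellipse: the 2D Fourier transform of its indicator is ab·J1(2πs)/s with s = sqrt((a·kx)² + (b·ky)²), so k-space can be sampled without any rasterisation. A sketch under that assumption (scipy's Bessel function is used):

# Aliasing-free k-space samples of an origin-centred ellipse phantom,
# using the closed-form Fourier transform of its indicator function.
import numpy as np
from scipy.special import j1

def ellipse_kspace(kx, ky, a, b):
    """F(kx, ky) of the indicator of {(x/a)^2 + (y/b)^2 <= 1}."""
    kx, ky = np.asarray(kx, float), np.asarray(ky, float)
    s = np.sqrt((a * kx) ** 2 + (b * ky) ** 2)
    out = np.full_like(s, np.pi * a * b)            # limit at k = 0 is the ellipse area
    m = s > 0
    out[m] = a * b * j1(2.0 * np.pi * s[m]) / s[m]
    return out

Shifted, rotated, and weighted ellipses follow from the usual Fourier shift and rotation properties, which is how composite phantoms are assembled.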
Article
A general-purpose deformable registration algorithm referred to as "DRAMMS" is presented in this paper. DRAMMS bridges the gap between the traditional voxel-wise methods and landmark/feature-based methods with primarily two contributions. First, DRAMMS renders each voxel relatively distinctively identifiable by a rich set of attributes, therefore largely reducing matching ambiguities. In particular, a set of multi-scale and multi-orientation Gabor attributes are extracted and the optimal components are selected, so that they form a highly distinctive morphological signature reflecting the anatomical and geometric context around each voxel. Moreover, the way in which the optimal Gabor attributes are constructed is independent of the underlying image modalities or contents, which renders DRAMMS generally applicable to diverse registration tasks. A second contribution of DRAMMS is that it modulates the registration by assigning higher weights to those voxels having higher ability to establish unique (hence reliable) correspondences across images, therefore reducing the negative impact of those regions that are less capable of finding correspondences (such as outlier regions). A continuously-valued weighting function named "mutual-saliency" is developed to reflect the matching uniqueness between a pair of voxels implied by the tentative transformation. As a result, voxels do not contribute equally as in most voxel-wise methods, nor in isolation as in landmark/feature-based methods. Instead, they contribute according to the continuously-valued mutual-saliency map, which dynamically evolves during the registration process. Experiments in simulated images, inter-subject images, single-/multi-modality images, from brain, heart, and prostate have demonstrated the general applicability and the accuracy of DRAMMS.
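The attribute vector at each location is simply the stacked responses of a multi-scale, multi-orientation Gabor bank. A 2D illustration of building such a per-pixel signature (parameters chosen arbitrarily for the sketch, not DRAMMS' selected bank):

# Per-pixel Gabor attribute vectors: responses of a small multi-scale,
# multi-orientation filter bank stacked along the last axis (2D for brevity).
import numpy as np
from scipy.ndimage import convolve

def gabor_kernel(freq, theta, sigma, size=31):
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return np.exp(-(xr**2 + yr**2) / (2 * sigma**2)) * np.cos(2 * np.pi * freq * xr)

def gabor_attributes(image, freqs=(0.1, 0.2), thetas=(0, np.pi/4, np.pi/2, 3*np.pi/4)):
    responses = [convolve(image, gabor_kernel(f, t, sigma=0.5 / f), mode="reflect")
                 for f in freqs for t in thetas]
    return np.stack(responses, axis=-1)   # shape (H, W, n_freqs * n_thetas)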
Article
We present a robust FFT-based approach to scale-invariant image registration. Our method relies on FFT-based correlation twice: once in the log-polar Fourier domain to estimate the scaling and rotation and once in the spatial domain to recover the residual translation. Previous methods based on the same principles are not robust. To equip our scheme with robustness and accuracy, we introduce modifications which tailor the method to the nature of images. First, we derive efficient log-polar Fourier representations by replacing image functions with complex gray-level edge maps. We show that this representation both captures the structure of salient image features and circumvents problems related to the low-pass nature of images, interpolation errors, border effects, and aliasing. Second, to recover the unknown parameters, we introduce the normalized gradient correlation. We show that, using image gradients to perform correlation, the errors induced by outliers are mapped to a uniform distribution for which our normalized gradient correlation features robust performance. Exhaustive experimentation with real images showed that, unlike any other Fourier-based correlation techniques, the proposed method was able to estimate translations, arbitrary rotations, and scale factors up to 6.
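The translation-recovery stage of such pipelines is correlation via the cross-power spectrum; below is a minimal phase-correlation sketch for integer circular shifts. The paper's normalised gradient correlation instead correlates complex grey-level edge maps, which is what provides the robustness to outliers.

# Minimal phase correlation: recovers an integer circular shift between two
# images from the peak of the inverse FFT of the normalised cross-power spectrum.
import numpy as np

def phase_correlation_shift(a, b, eps=1e-12):
    A, B = np.fft.fft2(a), np.fft.fft2(b)
    R = A * np.conj(B)
    R /= np.abs(R) + eps
    corr = np.fft.ifft2(R).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Map the peak location to signed shifts (sign convention depends on
    # which image is taken as the reference).
    if dy > a.shape[0] // 2: dy -= a.shape[0]
    if dx > a.shape[1] // 2: dx -= a.shape[1]
    return dy, dx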
Article
A deformable registration method is proposed for registering a normal brain atlas with images of brain tumor patients. The registration is facilitated by first simulating the tumor mass effect in the normal atlas in order to create an atlas image that is as similar as possible to the patient's image. An optimization framework is used to optimize the location of tumor seed as well as other parameters of the tumor growth model, based on the pattern of deformation around the tumor region. In particular, the optimization is implemented in a multiresolution and hierarchical scheme, and it is accelerated by using a principal component analysis (PCA)-based model of tumor growth and mass effect, trained on a computationally more expensive biomechanical model. Validation on simulated and real images shows that the proposed registration framework, referred to as ORBIT (optimization of tumor parameters and registration of brain images with tumors), outperforms other available registration methods particularly for the regions close to the tumor, and it has the potential to assist in constructing statistical atlases from tumor-diseased brain images.
Article
The National Library of Medicine's Visible Human Male data set consists of digital magnetic resonance (MR), computed tomography (CT), and anatomic images derived from a single male cadaver. The data set is 15 gigabytes in size and is available from the National Library of Medicine under a no-cost license agreement. The history of the Visible Human Male cadaver and the methods and technology to produce the data set are described.
Article
Mutual information has developed into an accurate measure for rigid and affine monomodality and multimodality image registration. The robustness of the measure is questionable, however. A possible reason for this is the absence of spatial information in the measure. The present paper proposes to include spatial information by combining mutual information with a term based on the image gradient of the images to be registered. The gradient term not only seeks to align locations of high gradient magnitude, but also aims for a similar orientation of the gradients at these locations. Results of combining both standard mutual information as well as a normalized measure are presented for rigid registration of three-dimensional clinical images [magnetic resonance (MR), computed tomography (CT), and positron emission tomography (PET)]. The results indicate that the combined measures yield a better registration function than does mutual information or normalized mutual information per se. The registration functions are less sensitive to low sampling resolution, do not contain incorrect global maxima that are sometimes found in the mutual information function, and interpolation-induced local minima can be reduced. These characteristics yield the promise of more robust registration measures. The accuracy of the combined measures is similar to that of mutual information-based methods.
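A sketch of such a gradient-alignment term, using one weighting that rewards parallel and anti-parallel gradients equally; in the cited paper a term of this kind is combined with (normalised) mutual information. The exact weighting below is illustrative:

# Gradient-alignment term: strong, similarly (or oppositely) oriented gradients
# in both images contribute; combine the result with an (N)MI estimate.
import numpy as np

def gradient_term(a, b, eps=1e-12):
    ga = np.stack(np.gradient(a), axis=-1)
    gb = np.stack(np.gradient(b), axis=-1)
    ma = np.linalg.norm(ga, axis=-1)
    mb = np.linalg.norm(gb, axis=-1)
    cos_alpha = np.sum(ga * gb, axis=-1) / (ma * mb + eps)
    alpha = np.arccos(np.clip(cos_alpha, -1.0, 1.0))
    w = (np.cos(2 * alpha) + 1.0) / 2.0          # peaks at 0 and pi
    return float(np.sum(w * np.minimum(ma, mb)))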
Article
Medical imaging data sets are often corrupted by multiplicative inhomogeneities, often referred to as nonuniformities or intensity variations, that hamper the use of quantitative analyses. The authors describe an automatic technique that not only improves the worst situations, such as those encountered with magnetic resonance imaging (MRI) surface coils, but also corrects typical inhomogeneities encountered in routine volume data sets, such as MRI head scans, without generating additional artifact. Because the technique uses only the patient data set, the technique can be applied retrospectively to all data sets, and corrects both patient independent effects, such as rf coil design, and patient dependent effects, such as attenuation of overlying tissue experienced both in high field MRI and X-ray computed tomography (CT). The authors show results for several MRI imaging situations including thorax, head, and breast. Following such corrections, region of interest analyses, volume histograms, and thresholding techniques are more meaningful. The value of such correction algorithms may increase dramatically with increased use of high field strength magnets and associated patient-dependent rf attenuation in overlying tissues.
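As a rough illustration of retrospective correction in general (explicitly not the authors' algorithm): if the corruption is modelled as a slowly varying multiplicative field, one classical approach estimates it by heavy smoothing in the log domain and divides it out.

# Crude homomorphic-style bias correction: estimate a slowly varying
# multiplicative field from the log image, then divide it out.
# Generic illustration only, not the method proposed in the cited paper.
import numpy as np
from scipy.ndimage import gaussian_filter

def remove_multiplicative_bias(volume, sigma=30.0, eps=1e-6):
    log_img = np.log(volume + eps)
    log_bias = gaussian_filter(log_img, sigma=sigma)   # keeps only the low-frequency field
    corrected = np.exp(log_img - log_bias)
    return corrected * volume.mean() / (corrected.mean() + eps)   # restore overall scale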
There are two major formulations of image alignment using gradient descent. The first estimates an additive increment to the parameters (the additive approach); the second estimates an incremental warp (the compositional approach). We first prove that these two formulations are equivalent. A very efficient algorithm was proposed by Hager and Belhumeur (1998) using the additive approach, but unfortunately it can only be applied to a very restricted class of warps. We show that using the compositional approach an equally efficient algorithm (the inverse compositional algorithm) can be derived that can be applied to any set of warps which form a group. While most warps used in computer vision form groups, there are certain warps that do not. Perhaps most notable is the set of piecewise affine warps used in flexible appearance models (FAMs). We end this paper by extending the inverse compositional algorithm to apply to FAMs.
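For reference, the update rules minimise the same SSD objective but parameterise the increment differently (standard Lucas-Kanade notation: image I, template T, warp W(x; p)):

\min_{\Delta p} \sum_x \big[ I(W(x;\, p + \Delta p)) - T(x) \big]^2, \qquad p \leftarrow p + \Delta p \quad \text{(additive)}

\min_{\Delta p} \sum_x \big[ I(W(W(x;\Delta p);\, p)) - T(x) \big]^2, \qquad W(\cdot\,;p) \leftarrow W(\cdot\,;p) \circ W(\cdot\,;\Delta p) \quad \text{(compositional)}

\min_{\Delta p} \sum_x \big[ T(W(x;\Delta p)) - I(W(x;\, p)) \big]^2, \qquad W(\cdot\,;p) \leftarrow W(\cdot\,;p) \circ W(\cdot\,;\Delta p)^{-1} \quad \text{(inverse compositional)}

Swapping the roles of I and T in the increment is what lets the Hessian be precomputed in the inverse compositional case.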
Article
An overview is presented of the medical image processing literature on mutual-information-based registration. The aim of the survey is threefold: an introduction for those new to the field, an overview for those working in the field, and a reference for those searching for literature on a specific application. Methods are classified according to the different aspects of mutual-information-based registration. The main division is in aspects of the methodology and of the application. The part on methodology describes choices made on facets such as preprocessing of images, gray value interpolation, optimization, adaptations to the mutual information measure, and different types of geometrical transformations. The part on applications is a reference of the literature available on different modalities, on interpatient registration and on different anatomical objects. Comparison studies including mutual information are also considered. The paper starts with a description of entropy and mutual information and it closes with a discussion on past achievements and some future challenges.
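For readers new to the field, the central quantities of the survey are, with p(a, b) the normalised joint intensity histogram and H the Shannon entropy,

\mathrm{MI}(A, B) = H(A) + H(B) - H(A, B) = \sum_{a, b} p(a, b) \log \frac{p(a, b)}{p(a)\, p(b)}, \qquad \mathrm{NMI}(A, B) = \frac{H(A) + H(B)}{H(A, B)}.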
Article
Since the Lucas-Kanade algorithm was proposed in 1981, image alignment has become one of the most widely used techniques in computer vision. Applications range from optical flow, tracking, and layered motion, to mosaic construction, medical image registration, and face coding. Numerous algorithms have been proposed and a variety of extensions have been made to the original formulation. We present an overview of image alignment, describing most of the algorithms in a consistent framework. We concentrate on the inverse compositional algorithm, an efficient algorithm that we recently proposed. We examine which of the extensions to the Lucas-Kanade algorithm can be used with the inverse compositional algorithm without any significant loss of efficiency, and which cannot. In this paper, the fourth and final part in the series, we cover the addition of priors on the parameters. We first consider the addition of priors on the warp parameters. We show that priors can be added with minimal extra cost to all of the algorithms in Parts 1–3. Next we consider the addition of priors on both the warp and appearance parameters. Image alignment with appearance variation was covered in Part 3. For each algorithm in Part 3, we describe whether priors can be placed on the appearance parameters or not, and if so what the cost is.
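In its simplest form, the prior turns the SSD cost into a MAP-style objective; a quadratic (Gaussian) prior on the warp parameters, shown here purely as an illustrative instance of the general priors treated in the series, gives

\min_{p} \; \sum_x \big[ I(W(x; p)) - T(x) \big]^2 \; + \; \lambda\, (p - \mu)^\top \Sigma^{-1} (p - \mu),

with \mu and \Sigma the prior mean and covariance of the parameters and \lambda a trade-off weight.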
S. Lucey, R. Navarathna, A. B. Ashraf, S. Sridharan, Fourier Lucas-Kanade Algorithm, IEEE Transactions on Pattern Analysis and Machine Intelligence 35 (6) (2013) 1383–1396.
V. Spitzer, M. J. Ackerman, A. L. Scherzinger, D. Whitlock, The Visible Human Male: A Technical Report, Journal of the American Medical Informatics Association 3 (2) (1996) 118–130.
S. Baker, R. Gross, T. Ishikawa, I. Matthews, Lucas-Kanade 20 Years On: A Unifying Framework: Part 2, Tech. Rep., 2003.
N. Tustison, B. Avants, P. Cook, Y. Zheng, A. Egan, P. Yushkevich, J. Gee, N4ITK: Improved N3 Bias Correction, IEEE Transactions on Medical Imaging 29 (6) (2010) 1310–1320.