ABSTRACT: Mutual information (MI) was introduced for use in multimodal image registration over a decade ago [1,2,3,4]. The MI between
two images is based on their marginal and joint/conditional entropies. The most common versions of entropy used to compute
MI are the Shannon and differential entropies; however, many other definitions of entropy have been proposed as competitors.
In this article, we show how to construct normalized versions of MI using any of these definitions of entropy. The resulting
similarity measures are analogous to normalized mutual information (NMI), entropy correlation coefficient (ECC), and symmetric
uncertainty (SU), which have all been shown to be superior to MI in a variety of situations. We use publicly available CT,
PET, and MR brain images with known ground truth transformations to evaluate the performance of the normalized measures for
rigid multimodal registration. Results show that for a number of different definitions of entropy, the proposed normalized
versions of mutual information provide a statistically significant improvement in target registration error (TRE) over their standard, unnormalized counterparts.
Biomedical Image Registration, 4th International Workshop, WBIR 2010, Lübeck, Germany, July 11-13, 2010. Proceedings; 01/2010
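The measures named in the abstract above can be sketched from a joint intensity histogram using Shannon entropy. This is a minimal illustrative implementation (function names and the bin count are my own choices, not the paper's); note that with Shannon entropy the entropy correlation coefficient and symmetric uncertainty reduce to the same expression, 2·MI/(H(A)+H(B)).

```python
import numpy as np

def shannon_entropy(p):
    """Shannon entropy (in nats) of a probability array of any shape."""
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def normalized_similarities(img_a, img_b, bins=32):
    """Compute MI and its normalized variants (NMI, ECC, SU) from a joint
    intensity histogram. A sketch using Shannon entropy; other entropy
    definitions would plug into the same formulas."""
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    p_ab = joint / joint.sum()
    p_a = p_ab.sum(axis=1)          # marginal of image A
    p_b = p_ab.sum(axis=0)          # marginal of image B
    h_a, h_b, h_ab = (shannon_entropy(p) for p in (p_a, p_b, p_ab))
    mi = h_a + h_b - h_ab
    return {
        "MI": mi,
        "NMI": (h_a + h_b) / h_ab,      # Studholme et al.'s normalization
        "ECC": 2.0 * mi / (h_a + h_b),  # entropy correlation coefficient
        "SU": 2.0 * mi / (h_a + h_b),   # symmetric uncertainty (same form here)
    }
```

For Shannon entropy, MI is non-negative and NMI is at least 1, since H(A,B) ≤ H(A) + H(B).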
ABSTRACT: Multi-modal image registration is a challenging problem in medical imaging. The goal is to align anatomically identical structures; however, their appearance in images acquired with different imaging devices, such as CT or MR, may be very different. Registration algorithms generally deform one image, the floating image, such that it matches with a second, the reference image, by maximizing some similarity score between the deformed and the reference image. Instead of using a universal, but a priori fixed similarity criterion such as mutual information, we propose learning a similarity measure in a discriminative manner such that the reference and correctly deformed floating images receive high similarity scores. To this end, we develop an algorithm derived from max-margin structured output learning, and employ the learned similarity measure within a standard rigid registration algorithm. Compared to other approaches, our method adapts to the specific registration problem at hand and exploits correlations between neighboring pixels in the reference and the floating image. Empirical evaluation on CT-MR/PET-MR rigid registration tasks demonstrates that our approach yields robust performance and outperforms the state of the art methods for multi-modal medical image registration.
Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR); 01/2009
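The discriminative idea in the abstract above can be sketched as a structured-perceptron-style margin update over a linear similarity w·φ(R, F∘T): the feature map, learning rate, and update rule here are illustrative assumptions, not the paper's exact max-margin algorithm.

```python
import numpy as np

def joint_feature(ref, flo, bins=8):
    """Hypothetical joint feature map: a flattened, coarse joint intensity
    histogram capturing co-occurrence between reference and floating image."""
    h, _, _ = np.histogram2d(ref.ravel(), flo.ravel(), bins=bins)
    return (h / h.sum()).ravel()

def max_margin_train(ref, floats_correct, floats_wrong, epochs=10, lr=0.1):
    """Push the score of correctly deformed floating images above incorrectly
    deformed ones by a unit margin (perceptron-with-margin sketch)."""
    w = np.zeros(64)  # 8x8 histogram bins flattened
    for _ in range(epochs):
        for good, bad in zip(floats_correct, floats_wrong):
            phi_g = joint_feature(ref, good)
            phi_b = joint_feature(ref, bad)
            if w @ phi_g < w @ phi_b + 1.0:    # margin violated
                w += lr * (phi_g - phi_b)      # move toward correct pairing
    return w
```

After training, the learned w scores a correctly aligned floating image above a misaligned one, and would be maximized over transforms inside a standard rigid registration loop.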
ABSTRACT: Studholme et al. introduced normalized mutual information (NMI) as an overlap-invariant generalization of mutual information (MI). Even though Studholme showed how NMI could be used effectively in multimodal medical image alignment, the overlap invariance was only established empirically on a few simple examples. In this paper, we illustrate a simple example in which NMI fails to be invariant to changes in overlap size, as do other standard similarity measures including MI, cross correlation (CCorr), correlation coefficient (CCoeff), correlation ratio (CR), and entropy correlation coefficient (ECC). We then derive modified forms of all of these similarity measures that are proven to be invariant to changes in overlap size. This is done by making certain assumptions about background statistics. Experiments on multimodal rigid registration of brain images show that 1) most of the modified similarity measures outperform their standard forms, and 2) the modified version of MI exhibits superior performance over any of the other similarity measures for both CT/MR and PET/MR registration.
IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops (CVPRW '08); 07/2008
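The overlap sensitivity discussed in the abstract above can be seen in a toy computation (a constructed example, not the paper's): NMI evaluated on a correlated image pair changes when identical background samples are appended to both images, i.e., when only the overlap size grows.

```python
import numpy as np

def nmi(a, b, bins=16):
    """NMI = (H(A) + H(B)) / H(A,B), Shannon entropy over a joint histogram."""
    joint, _, _ = np.histogram2d(a, b, bins=bins)
    p = joint / joint.sum()
    pa, pb = p.sum(axis=1), p.sum(axis=0)
    H = lambda q: -np.sum(q[q > 0] * np.log(q[q > 0]))
    return (H(pa) + H(pb)) / H(p)

rng = np.random.default_rng(0)
core_a = rng.integers(0, 16, 500).astype(float)
core_b = core_a + rng.integers(0, 4, 500)   # correlated companion image
bg = np.zeros(500)                           # uniform background in both images
small = nmi(core_a, core_b)                  # NMI on the small overlap
large = nmi(np.concatenate([core_a, bg]),    # NMI after enlarging the overlap
            np.concatenate([core_b, bg]))    # with background-only voxels
```

In general `small != large`: the score depends on how much background the overlap region happens to contain, which is what the modified, provably invariant measures in the paper are designed to remove.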
ABSTRACT: We present a methodology for incorporating prior knowledge on class probabilities into the registration process. By using knowledge from the imaging modality, pre-segmentations, and/or probabilistic atlases, we construct vectors of class probabilities for each image voxel. By defining new image similarity measures for distribution-valued images, we show how the class probability images can be nonrigidly registered in a variational framework. An experiment on nonrigid registration of MR and CT full-body scans illustrates that the proposed technique outperforms standard mutual information (MI) and normalized mutual information (NMI) based registration techniques when measured in terms of target registration error (TRE) of manually labeled fiducials.
Proceedings of the MICCAI 2009 Workshop on Probabilistic Models for Medical Image Analysis (PMMIA 2009), 220-231 (2009).
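As a rough illustration of a similarity between distribution-valued images such as those in the abstract above, one plausible choice (not the paper's own variational measures) is the mean per-voxel Bhattacharyya coefficient between class-probability vectors.

```python
import numpy as np

def class_prob_similarity(p_img, q_img):
    """Mean per-voxel Bhattacharyya coefficient between class-probability
    images. p_img, q_img have shape (n_voxels, n_classes), rows sum to 1.
    Equals 1.0 when the per-voxel distributions match exactly."""
    return float(np.mean(np.sum(np.sqrt(p_img * q_img), axis=1)))
```

A registration scheme in this spirit would deform one class-probability image to maximize this score against the other, rather than comparing raw intensities.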