A novel registration method for retinal images based on local features
Institute of Automation, Chinese Academy of Sciences, Beijing, China. Conference proceedings: ... Annual International Conference of the IEEE Engineering in Medicine and Biology Society, 02/2008; 2008:2242-5. DOI: 10.1109/IEMBS.2008.4649642
It is sometimes very hard to automatically detect the bifurcations of the vascular network in retinal images, so general feature-based registration methods fail to register the two images. To address this problem, we developed a novel local-feature-based retinal image registration method. We first detect corner points instead of bifurcations, since corner points are plentiful and uniformly distributed across the overlapping region. Second, a novel, highly distinctive local feature is extracted around each corner point. These local features are invariant to rotation and contrast, and partially invariant to scaling. Third, a bilateral matching technique is applied to identify corresponding features between the two images. Finally, a second-order polynomial transformation is used to register the two images. Experimental results show that our method is robust and computationally efficient, registering even retinal images of very low quality.
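The matching and transformation steps of the abstract can be sketched in NumPy. This is a minimal illustration, not the authors' implementation: the function names are my own, "bilateral matching" is read here as mutual nearest-neighbour matching over a descriptor distance matrix, and the second-order polynomial transform is fitted by ordinary least squares over the six monomials [1, x, y, x², xy, y²].

```python
import numpy as np

def bilateral_match(d_ab):
    """Mutual nearest-neighbour ("bilateral") matching.

    d_ab: (Na, Nb) matrix of descriptor distances between the feature
    sets of image A and image B. A pair (i, j) is kept only if j is the
    best match for i AND i is the best match for j.
    """
    a_best = d_ab.argmin(axis=1)   # best B-feature for each A-feature
    b_best = d_ab.argmin(axis=0)   # best A-feature for each B-feature
    return [(i, j) for i, j in enumerate(a_best) if b_best[j] == i]

def _poly2_basis(pts):
    """Second-order monomial basis [1, x, y, x^2, x*y, y^2] per point."""
    x, y = pts[:, 0], pts[:, 1]
    return np.column_stack([np.ones_like(x), x, y, x**2, x * y, y**2])

def fit_poly2(src, dst):
    """Least-squares fit of a second-order polynomial transform
    mapping src points onto dst points (needs at least 6 matches)."""
    A = _poly2_basis(src)
    cx, *_ = np.linalg.lstsq(A, dst[:, 0], rcond=None)
    cy, *_ = np.linalg.lstsq(A, dst[:, 1], rcond=None)
    return cx, cy

def apply_poly2(cx, cy, pts):
    """Warp points with the fitted polynomial coefficients."""
    A = _poly2_basis(pts)
    return np.column_stack([A @ cx, A @ cy])
```

In practice the matched corner-point coordinates from the two images would feed `fit_poly2`, and the resulting coefficients warp one image onto the other.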
- "The registration techniques have been developed in the past decades for increasing the camera's field of view and acquiring adequate details by stitching together more images. The basis for this approach was the alignment of an image set of thin serial sections using image registration (Huang and Cooper, 2006; Chen et al., 2008; Brown and Lowe, 2007; Ma et al., 2015b). Examples of SRBSDV (Southern rice black-streaked dwarf virus) images are shown in Fig. 1."
ABSTRACT: Electron microscopy is one of the major means of observing viruses. The field of view of virus microscope images is limited by specimen preparation and by the size of the camera's field of view. To overcome this, the virus sample is cut into multiple slices, and image registration techniques are applied to fuse the information and recover large-field views of whole sections. Image registration techniques have been developed over the past decades to increase the camera's field of view. Nevertheless, these approaches typically work in batch mode and rely on motorized microscopes; alternatively, they are designed only to produce visually pleasant registration for image sequences with high overlap ratios. This work presents a method for registering virus microscope images with detailed visual information and subpixel accuracy, even when the overlap ratio of the image sequence is 10% or less. The proposed method focuses on the correspondence set and the inter-image transformation. A mismatch-removal strategy based on spatial consistency and keypoint components is proposed to enrich the correspondence set, and the translation-model parameters as well as tonal inhomogeneities are corrected by hierarchical estimation and model selection. In the experiments performed, we tested different registration approaches and virus images, confirming that the translation model is not always stationary, even though the images of the sample come from the same sequence. The mismatch-removal strategy makes subpixel registration of virus microscope images easier, and the hierarchical estimation and model-selection strategies make the proposed method precise and reliable for low-overlap image sequences.
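The mismatch-removal idea mentioned above, under a translation model, can be illustrated with a simple spatial-consistency check: inlier matches all share (roughly) one displacement vector, so matches whose displacement deviates from the robust (median) estimate are rejected. This is a hedged sketch under that assumption; the function name and tolerance are mine, not the paper's.

```python
import numpy as np

def spatial_consistency_filter(src, dst, tol=3.0):
    """Keep putative matches consistent with a single translation.

    src, dst: (N, 2) arrays of matched keypoint coordinates.
    tol: maximum allowed deviation (pixels) from the median displacement.
    Returns a boolean mask of matches to keep.
    """
    disp = dst - src                       # per-match displacement vectors
    med = np.median(disp, axis=0)          # robust translation estimate
    return np.linalg.norm(disp - med, axis=1) <= tol
```

A robust estimator such as the median tolerates a substantial fraction of outliers before the translation estimate drifts, which is why it works even on noisy putative correspondence sets.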
- "Nonetheless, the main deficiency is its low distinctiveness due to the reduced dimension of SIFT. In order to achieve higher distinctiveness, the partial intensity invariant feature descriptor (PIIFD) is introduced. Similar to SIFT in consisting of a 128-dimensional vector and sharing some common characteristics, PIIFD combines constrained gradient orientations between 0 and π linearly, and performs a rotation to address the multimodal problem of gradient orientations of corresponding points lying in opposite directions."
ABSTRACT: Existing feature descriptor-based methods for retinal image registration are mainly based on the scale-invariant feature transform (SIFT) or the partial intensity invariant feature descriptor (PIIFD). While these descriptors are widely exploited, they do not work very well on unhealthy multimodal images with severe diseases. Additionally, the descriptors demand high dimensionality to adequately represent the features of interest; the higher the dimensionality, the greater the consumption of resources (e.g. memory space). To this end, this paper introduces a novel registration algorithm coined low-dimensional step pattern analysis (LoSPA), tailored to achieve low dimensionality while providing sufficient distinctiveness to effectively align unhealthy multimodal image pairs. The algorithm locates hypotheses of robust corner features based on connecting edges from the edge maps, mainly formed by vascular junctions. This method is insensitive to intensity changes, and produces uniformly distributed features with high repeatability across the image domain. The algorithm continues by describing the corner features in a rotation-invariant manner using step patterns. These customized step patterns are robust to non-linear intensity changes, making them well suited for multimodal retinal image registration. Apart from its low dimensionality, the LoSPA algorithm achieves about twofold higher success rate in multimodal registration on the dataset of severe retinal diseases when compared to the top score among state-of-the-art algorithms.
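The "constrained gradient orientations between 0 and π" mentioned in the excerpt above address a multimodal effect: under intensity inversion between modalities, gradient directions at corresponding points flip by π. Folding orientations into [0, π) makes opposite directions coincide. The small sketch below shows only that folding step (the function name is mine; PIIFD itself does considerably more):

```python
import numpy as np

def constrained_orientation(gx, gy):
    """Fold gradient orientation into [0, pi).

    Gradients (gx, gy) and (-gx, -gy), which arise at corresponding
    points when intensities are inverted between modalities, map to the
    same constrained orientation.
    """
    theta = np.arctan2(gy, gx)   # full orientation in (-pi, pi]
    return np.mod(theta, np.pi)  # opposite directions now coincide
```

Accumulating these folded orientations into histograms is what gives a PIIFD-style descriptor its partial invariance to multimodal intensity changes.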
- "Feature based techniques involve the detection of landmark points in the retinal vascular network and the extraction of features representing the landmark points, followed by the application of a match metric to identify the correspondences between two images. Most of the feature based methods use bifurcation points as landmarks since they are a remarkable indicator of vasculature, but some of them also use other control points such as Harris corners."
ABSTRACT: Accurate retinal image registration is essential to track the evolution of eye-related diseases. We propose a semiautomatic method, based on features derived from retinal graphs, for temporal registration of retinal images. The features represent straight lines connecting vascular landmarks on the retinal vascular tree: bifurcations, branchings, crossings, and end points. In the built retinal graph, a straight line between two vascular landmarks indicates that they are connected by a vascular segment in the original retinal image. The locations of the landmarks are extracted manually to avoid the information loss caused by errors in retinal vessel segmentation algorithms. A straight-line model is designed to compute a similarity measure quantifying the line matching between images. From the set of matching lines, corresponding points are extracted and a global transformation is computed. The performance of the registration method is evaluated in the absence of ground truth using the cumulative inverse consistency error (CICE).
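The inverse consistency idea behind the CICE metric mentioned above can be illustrated with a plain (non-cumulative) round-trip error: register A to B and B to A independently, then measure how far points land from their starting positions after the round trip. This simplified sketch is my own reading, not the paper's exact CICE definition:

```python
import numpy as np

def inverse_consistency_error(t_ab, t_ba, pts):
    """Mean round-trip displacement under a forward/backward transform pair.

    t_ab, t_ba: callables mapping (N, 2) point arrays between images A and B.
    A perfectly inverse-consistent pair maps every point back onto itself,
    giving an error of zero.
    """
    back = t_ba(t_ab(pts))
    return float(np.mean(np.linalg.norm(back - pts, axis=1)))
```

Because no ground-truth correspondence is needed, only the two estimated transforms, this style of metric suits evaluation settings like the one described in the abstract.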