Automatic Localization of Anatomical Point Landmarks for Brain Image Processing Algorithms

Department of Neurology, UCLA Laboratory of Neuro Imaging, David Geffen School of Medicine, Suite 225, 635 Charles Young Drive South, Los Angeles, CA 90095-7334, USA.
Neuroinformatics (Impact Factor: 3.1). 02/2008; 6(2):135-48. DOI: 10.1007/s12021-008-9018-x
Source: PubMed

ABSTRACT Many brain image processing algorithms require one or more well-chosen seed points because they must be initialized close to an optimal solution. Anatomical point landmarks are useful for constructing initial conditions for these algorithms because they tend to be highly visible and predictably located in brain image scans. We introduce an empirical training procedure that locates user-selected anatomical point landmarks to within well-defined precision using image data with different resolutions and MRI weightings. Our approach makes no assumptions about the structural or intensity characteristics of the images and requires no tunable run-time parameters. We demonstrate the procedure using a Java GUI application (LONI ICE) to determine the MRI weighting of brain scans and to locate features in T1-weighted and T2-weighted scans.

  • ABSTRACT: Despite intensive efforts for decades, deformable image registration remains a challenging problem because of the potentially large anatomical differences across individual images, which limit registration performance. Fortunately, this issue can be alleviated if a good initial deformation is provided for the two images under registration, often termed the moving subject and the fixed template, respectively. In this work, we present a novel patch-based initial deformation prediction framework for improving the performance of existing registration algorithms. Our main idea is to estimate the initial deformation between subject and template in a patch-wise fashion using the sparse representation technique. We argue that two image patches should follow the same deformation toward the template image if their patch-wise appearance patterns are similar. To this end, our framework consists of two stages: the training stage and the application stage. In the training stage, we register all training images to the pre-selected template, so that the deformation of each training image with respect to the template is known. In the application stage, we apply the following four steps to efficiently calculate the initial deformation field for a new test subject: (1) we pick a small number of key points in the distinctive regions of the test subject; (2) for each key point, we extract a local patch and form a coupled appearance-deformation dictionary from the training images, where each dictionary atom consists of an image intensity patch together with its respective local deformation; (3) a small set of training image patches in the coupled dictionary is selected to represent the image patch at each subject key point by sparse representation, and we then predict the initial deformation at each subject key point by propagating the pre-estimated deformations of the selected training patches with the same sparse representation coefficients; and (4) we employ thin-plate splines (TPS) to interpolate a dense initial deformation field, treating all key points as control points. The conventional image registration problem thus becomes much easier, in the sense that we only need to compute the remaining small deformation to complete the registration of the subject to the template. Experimental results on both simulated and real data show that registration performance can be significantly improved after integrating our patch-based deformation prediction framework into existing registration algorithms.
    NeuroImage 10/2014; 105. DOI:10.1016/j.neuroimage.2014.10.019 · 6.13 Impact Factor
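Step (4) above, interpolating a dense deformation field from sparse key-point deformations with thin-plate splines, can be sketched in a few lines of Python. The key-point coordinates and displacement vectors below are made-up illustrative values, not data from the paper; in the framework they would come from the sparse-coding prediction step:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Hypothetical key points in a 2-D image (pixel coordinates) and the
# displacement vector predicted at each one.
key_points = np.array([[10.0, 12.0], [40.0, 8.0], [25.0, 45.0], [55.0, 50.0]])
displacements = np.array([[1.5, -0.5], [0.0, 2.0], [-1.0, 1.0], [0.5, 0.5]])

# Thin-plate-spline interpolation of a dense deformation field,
# treating every key point as a control point.
tps = RBFInterpolator(key_points, displacements, kernel="thin_plate_spline")

yy, xx = np.mgrid[0:64, 0:64]               # a 64x64 image grid
grid = np.column_stack([xx.ravel(), yy.ravel()]).astype(float)
dense_field = tps(grid).reshape(64, 64, 2)  # per-pixel (dx, dy)

# With zero smoothing (the default), TPS reproduces the control
# points exactly.
print(np.allclose(tps(key_points), displacements))
```

The remaining registration then only has to recover the small residual deformation left after applying this initial field.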
  • ABSTRACT: We have developed a near-real-time computer system that can locate and track a subject's head, and then recognize the person by comparing characteristics of the face to those of known individuals. The computational approach taken in this system is motivated by both physiology and information theory, as well as by the practical requirements of near-real-time performance and accuracy. Our approach treats the face recognition problem as an intrinsically two-dimensional (2-D) recognition problem rather than requiring recovery of three-dimensional geometry, taking advantage of the fact that faces are normally upright and thus may be described by a small set of 2-D characteristic views. The system functions by projecting face images onto a feature space that spans the significant variations among known face images. The significant features are known as "eigenfaces," because they are the eigenvectors (principal components) of the set of faces; they do not necessarily correspond to features such as eyes, ears, and noses. The projection operation characterizes an individual face by a weighted sum of the eigenface features, and so to recognize a particular face it is necessary only to compare these weights to those of known individuals. Some particular advantages of our approach are that it provides for the ability to learn and later recognize new faces in an unsupervised manner, and that it is easy to implement using a neural network architecture.
    Journal of Cognitive Neuroscience 01/1991; 3(1):71-86. DOI:10.1162/jocn.1991.3.1.71 · 4.69 Impact Factor
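The eigenface pipeline described above (principal components of the training faces, recognition by comparing projection weights) can be sketched with plain NumPy. The random "faces" below are stand-ins for real face images, so this is a minimal illustration of the projection-and-compare idea rather than the authors' system:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 20 known "faces", each a flattened 16x16 image.
faces = rng.normal(size=(20, 256))

# Eigenfaces are the principal components of the mean-centred face set;
# the rows of Vt are ordered by explained variance.
mean_face = faces.mean(axis=0)
centred = faces - mean_face
_, _, Vt = np.linalg.svd(centred, full_matrices=False)
eigenfaces = Vt[:8]                      # keep the top 8 components

# Each known face is characterised by its weights in eigenface space.
known_weights = centred @ eigenfaces.T   # shape (20, 8)

def recognise(image):
    """Project a probe image and return the index of the closest known face."""
    w = (image - mean_face) @ eigenfaces.T
    return int(np.argmin(np.linalg.norm(known_weights - w, axis=1)))

# A slightly noisy copy of face 7 should still match face 7.
probe = faces[7] + 0.01 * rng.normal(size=256)
print(recognise(probe))  # → 7
```

Recognition reduces to a nearest-neighbour search in the low-dimensional weight space, which is what makes the approach fast enough for near-real-time use.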
  • ABSTRACT: In this work, we present a novel learning-based fiducial-driven registration (LeFiR) scheme that utilizes a point-matching technique to identify the optimal configuration of landmarks to better recover the deformation between a target and a moving image. Moreover, we employ the LeFiR scheme to model the localized nature of the deformation introduced by a new treatment modality, laser-induced interstitial thermal therapy (LITT), for treating neurological disorders. Magnetic resonance (MR) guided LITT has recently emerged as a minimally invasive alternative to craniotomy for local treatment of brain diseases (such as glioblastoma multiforme (GBM) and epilepsy). However, LITT is currently practised only as an investigational procedure worldwide, owing to a lack of data on longer-term patient outcomes following LITT. There is thus a need to quantitatively evaluate treatment-related changes between post- and pre-LITT images in terms of MR imaging markers. To validate LeFiR, we tested the scheme on a synthetic brain dataset (SBD) and in two real clinical scenarios for treating GBM and epilepsy with LITT. Four experiments under different deformation profiles simulating the localized ablation effects of LITT on MRI were conducted on 286 pairs of SBD images. The training landmark configurations were obtained through 2000 iterations of registration, from which the points with the consistently best registration performance were selected. The estimated landmarks greatly improved the quality metrics compared to a uniform grid (UniG) placement scheme, a speeded-up robust features (SURF) based method, a scale-invariant feature transform (SIFT) based method, and a generic free-form deformation (FFD) approach. The LeFiR method achieved an average 90% improvement in recovering the local deformation, compared to 82% for uniform grid placement, 62% for the SURF-based approach, and 16% for the generic FFD approach. On the real GBM and epilepsy data, the quantitative results showed that LeFiR outperformed UniG by an average improvement of 28%.
    Neurocomputing 11/2014; 144:24–37. DOI:10.1016/j.neucom.2013.11.051 · 2.01 Impact Factor
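The training-stage idea above, keeping the candidate points that register consistently well across many iterations, can be illustrated with a toy scoring rule. The error matrix, the number of candidates and iterations, and the mean-plus-spread score are all invented for illustration and are not the paper's actual selection criterion:

```python
import numpy as np

rng = np.random.default_rng(1)

n_candidates, n_iters = 50, 200   # stand-ins, not the paper's settings

# Synthetic per-iteration registration error for each candidate point;
# four candidates are made consistently better than the rest.
base = rng.uniform(1.0, 5.0, size=n_candidates)
base[[4, 11, 23, 37]] = 0.2       # the "good" landmarks
errors = base[:, None] + 0.1 * rng.normal(size=(n_candidates, n_iters))

# Score each candidate by mean error plus a penalty for inconsistency,
# then keep the k best as the trained landmark configuration.
k = 4
score = errors.mean(axis=1) + errors.std(axis=1)
landmarks = np.argsort(score)[:k]
print(sorted(landmarks.tolist()))  # → [4, 11, 23, 37]
```

Penalizing the standard deviation as well as the mean is one simple way to encode "consistently best" rather than merely "best on average".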

