We propose a method for characterizing spatial region data. The method efficiently constructs a k-dimensional feature vector using concentric spheres in 3D (circles in 2D) radiating out from a region's center of mass. These signatures capture structural and internal volume properties. We evaluate our approach with experiments on classification and similarity search, using artificial and real datasets. To generate artificial regions we introduce a region growth model. Similarity searches on artificial data demonstrate that our technique, although straightforward, compares favorably to mathematical morphology while being two orders of magnitude faster. Experiments with real datasets show its effectiveness and general applicability.
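A minimal sketch of how such a concentric-ring signature might be computed for a 2D binary region; the equal-width annuli, the feature count k, and the use of region-pixel fractions per ring are illustrative assumptions rather than the authors' exact definition:

```python
import numpy as np

def ring_signature(mask: np.ndarray, k: int = 8) -> np.ndarray:
    """Fraction of region pixels falling in k concentric annuli around the center of mass."""
    ys, xs = np.nonzero(mask)
    cy, cx = ys.mean(), xs.mean()                      # center of mass of the region
    r = np.hypot(ys - cy, xs - cx)                     # distance of each region pixel to the center
    r_max = r.max() if r.max() > 0 else 1.0
    edges = np.linspace(0.0, r_max, k + 1)             # k equal-width annuli
    counts, _ = np.histogram(r, bins=edges)
    return counts / counts.sum()                       # k-dimensional signature

# toy example: a filled disk
yy, xx = np.mgrid[:64, :64]
disk = (yy - 32) ** 2 + (xx - 32) ** 2 <= 20 ** 2
print(ring_signature(disk, k=8))
```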
This paper presents a flexible framework for building a target-specific, part-based representation of arbitrary articulated or rigid objects. The aim is to successfully track the target object in 2D through multiple scales and occlusions. This is realized by employing a hierarchical, iterative optimization process on the proposed representation of structure and appearance. To this end, each rigid part of an object is described by a hierarchical spring system represented by an attributed graph pyramid. Hierarchical spring systems encode the spatial relationships of the features (attributes of the graph pyramid) describing the parts and enforce them through spring-like behavior during tracking. Articulation points connecting the parts of the object allow position information to be transferred from reliable to ambiguous parts. Tracking is done in an iterative process that combines the hypotheses of simple trackers with the hypotheses extracted from the hierarchical spring systems.
3D electron microscopy aims at reconstructing density volumes corresponding to the electrostatic potential distribution of macromolecules. Many factors limit the resolution achievable when this technique is applied to biological macromolecules: microscope imperfections, molecule flexibility, lack of projections from certain directions, unknown angular distribution, noise, etc. In this communication we explore the quality gain obtained by including a priori knowledge in the reconstruction, such as particle symmetry, occupied volume, known surface relief, density nonnegativity, and similarity to a known volume. If the reconstruction is represented as a series expansion, such constraints can be expressed as sets of equations that the expansion coefficients must satisfy. In this work, these equation sets are specified and combined in a novel way with the ART + blobs reconstruction algorithm. The effect of each constraint on the reconstruction of a realistic phantom is explored. Finally, the application of these restrictions to 3D reconstructions from experimental data is studied.
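A minimal sketch of the underlying idea, assuming a simple Kaczmarz-style ART update on a voxel (rather than blob) basis with a nonnegativity constraint enforced after each sweep; the relaxation factor and iteration count are illustrative assumptions, not the paper's ART + blobs scheme:

```python
import numpy as np

def art_nonneg(A, b, n_iter=100, relax=0.5):
    """Kaczmarz-style ART with a nonnegativity projection after every sweep.

    A : (n_rays, n_voxels) projection matrix, b : measured projections.
    """
    x = np.zeros(A.shape[1])
    row_norms = (A ** 2).sum(axis=1)
    for _ in range(n_iter):
        for i in range(A.shape[0]):
            if row_norms[i] == 0:
                continue
            residual = b[i] - A[i] @ x
            x += relax * residual / row_norms[i] * A[i]   # ART row update
        np.clip(x, 0.0, None, out=x)                      # a priori constraint: density >= 0
    return x

# toy example: random projection geometry and a nonnegative ground-truth "volume"
rng = np.random.default_rng(0)
x_true = np.abs(rng.normal(size=50))
A = rng.normal(size=(200, 50))
x_hat = art_nonneg(A, A @ x_true)
print(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))   # relative reconstruction error
```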
A new approach, based on hierarchical soft correspondence detection, is presented for significantly improving the speed of our previous HAMMER image registration algorithm. Currently, HAMMER takes a relatively long time, e.g., up to 80 minutes, to register two regular-sized images on a Linux machine (2.40 GHz CPU, 2 GB of memory). This is because the correspondence detection used to guide the image warping can be ambiguous in complex structures, so the warping has to be conservative and accordingly takes a long time to complete. In this paper, a hierarchical soft correspondence detection technique is employed to detect correspondences more robustly, thereby allowing the image warping to be completed quickly and straightforwardly. By incorporating this hierarchical soft correspondence detection technique into the HAMMER registration framework, both robustness and accuracy of registration (in terms of low average registration error) can be achieved. Experimental results on real and simulated data show that the new registration algorithm, based on hierarchical soft correspondence detection, runs nine times faster than HAMMER while maintaining similar registration accuracy.
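A small sketch of what a "soft" correspondence step can look like, assuming correspondences are expressed as softmax-weighted assignments between two feature-point sets; the descriptors, temperature, and distance measure are illustrative assumptions, not the HAMMER formulation:

```python
import numpy as np

def soft_correspondence(feat_a, feat_b, temperature=1.0):
    """Soft assignment weights between two feature sets (rows = points, cols = descriptors)."""
    # pairwise squared descriptor distances
    d2 = ((feat_a[:, None, :] - feat_b[None, :, :]) ** 2).sum(axis=-1)
    w = np.exp(-d2 / temperature)
    return w / w.sum(axis=1, keepdims=True)        # each point in A gets a distribution over B

def soft_targets(points_b, weights):
    """Expected (soft) target position in B for each point in A, used to drive the warp."""
    return weights @ points_b

rng = np.random.default_rng(1)
fa, fb = rng.normal(size=(5, 4)), rng.normal(size=(6, 4))
pb = rng.normal(size=(6, 3))
W = soft_correspondence(fa, fb, temperature=0.5)
print(soft_targets(pb, W).shape)   # (5, 3): one soft target per point in A
```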
A method for spatio-temporally smooth and consistent estimation of cardiac motion from MR cine sequences is proposed. Myocardial motion is estimated within a four-dimensional (4D) registration framework, in which all 3D images obtained at different cardiac phases are registered simultaneously. This facilitates spatio-temporally consistent estimation of motion, in contrast to other registration-based algorithms that estimate the motion by sequentially registering one frame to another. To facilitate image matching, an attribute vector (AV) is constructed for each point in the image and is intended to serve as a "morphological signature" of that point. The AV includes intensity, boundary, and geometric moment invariants (GMIs). Hierarchical registration of the two image sequences is achieved by using the most distinctive points for the initial registration and gradually adding less distinctive points to refine it. Experimental results on real data demonstrate good performance of the proposed method for cardiac image registration and motion estimation. The motion estimation is validated by comparison with motion estimates obtained from MR images with myocardial tagging.
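A minimal illustration of building a per-point attribute vector from local intensity and geometric moment invariants, assuming Hu moments computed over a fixed square neighborhood stand in for the GMIs; the neighborhood size and the exact invariants used by the authors are assumptions:

```python
import numpy as np
from skimage.measure import moments_central, moments_normalized, moments_hu

def attribute_vector(image, y, x, half_width=7):
    """Intensity plus Hu moment invariants of a (2*half_width+1)^2 neighborhood."""
    patch = image[y - half_width:y + half_width + 1,
                  x - half_width:x + half_width + 1].astype(float)
    mu = moments_central(patch)
    hu = moments_hu(moments_normalized(mu))
    return np.concatenate(([image[y, x]], hu))     # 1 intensity value + 7 invariants

rng = np.random.default_rng(2)
img = rng.random((64, 64))
print(attribute_vector(img, 32, 32).shape)         # (8,)
```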
Kidney cancer occurs in both a hereditary (inherited) and a sporadic (non-inherited) form. It is estimated that almost a quarter of a million people in the USA are living with kidney cancer, and their number grows by about 51,000 newly diagnosed cases every year. In clinical practice, the response to treatment is monitored by manual measurements of tumor size, which are 2D, do not reflect the 3D geometry and enhancement of tumors, and show high intra- and inter-operator variability. We propose a computer-assisted radiology tool to assess renal tumors in contrast-enhanced CT for the management of tumor diagnoses and responses to new treatments. The algorithm employs anisotropic diffusion (for smoothing), a combination of fast marching and geodesic level sets (for segmentation), and a novel statistical refinement step to adapt to the shape of the lesions. It also quantifies the 3D size, volume, and enhancement of the lesion and allows serial assessment over time. Tumors are robustly segmented, and the comparison between manual and semi-automated quantifications shows differences within the limits of inter-observer variability. The analysis of lesion enhancement for tumor classification shows clear separation between cysts, von Hippel-Lindau syndrome lesions, and hereditary papillary renal carcinomas (HPRC), with p-values below 0.004. The results on temporal evaluation of tumors from serial scans illustrate the potential of the method to become an important tool for disease monitoring, drug trials, and noninvasive clinical surveillance.
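A minimal sketch of the anisotropic-diffusion smoothing step, assuming a basic Perona-Malik scheme; the conductance function, parameters, and iteration count are illustrative, and the segmentation and refinement steps are not shown:

```python
import numpy as np

def perona_malik(img, n_iter=20, kappa=30.0, dt=0.2):
    """Edge-preserving smoothing: diffuse strongly in flat areas, weakly across edges."""
    u = img.astype(float).copy()
    for _ in range(n_iter):
        # differences toward the four neighbors (zero flux at the image borders)
        dn = np.zeros_like(u); ds = np.zeros_like(u); de = np.zeros_like(u); dw = np.zeros_like(u)
        dn[1:, :] = u[:-1, :] - u[1:, :]
        ds[:-1, :] = u[1:, :] - u[:-1, :]
        de[:, :-1] = u[:, 1:] - u[:, :-1]
        dw[:, 1:] = u[:, :-1] - u[:, 1:]
        # Perona-Malik conductance: small across strong gradients (edges)
        c = lambda g: np.exp(-(g / kappa) ** 2)
        u += dt * (c(dn) * dn + c(ds) * ds + c(de) * de + c(dw) * dw)
    return u

rng = np.random.default_rng(3)
noisy = np.pad(np.ones((32, 32)), 16) * 100 + rng.normal(0, 10, (64, 64))
smoothed = perona_malik(noisy)
print(noisy.std(), smoothed.std())   # smoothing reduces the noise-driven variance
```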
Analysis of functional magnetic resonance imaging (fMRI) data in its native, complex form has been shown to increase sensitivity both for data-driven techniques, such as independent component analysis (ICA), and for model-driven techniques. The promise of increased sensitivity and specificity in clinical studies provides a powerful motivation for utilizing both the phase and magnitude data; however, the unknown and noisy nature of the phase poses a challenge. In addition, many complex-valued analysis algorithms, such as ICA, suffer from an inherent phase ambiguity, which introduces additional difficulty for group analysis. We present solutions for these issues, which have been among the main reasons phase information has traditionally been discarded, and show their effectiveness when used as part of a complex-valued group ICA analysis. The methods we present thus allow the development of new, fully complex data-driven and semi-blind methods to process, analyze, and visualize fMRI data. We first introduce a phase ambiguity correction scheme that can either be applied subsequent to ICA of fMRI data or be incorporated into the ICA algorithm as prior information, eliminating the need for further phase-correction processing. We also present a Mahalanobis distance-based thresholding method that incorporates both magnitude and phase information into a single threshold and can be used to increase sensitivity in the identification of voxels of interest. This method shows particular promise for identifying voxels with significant susceptibility changes that are located in areas of low magnitude activation. We demonstrate the performance gain of the introduced methods on actual fMRI data.
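A minimal sketch of Mahalanobis-distance thresholding on joint magnitude/phase features, assuming the background distribution is summarized by a sample mean and covariance and that voxels beyond a chi-square quantile are flagged; these modeling choices are illustrative, not the authors' exact procedure:

```python
import numpy as np
from scipy.stats import chi2

def mahalanobis_flags(magnitude, phase, background_mask, alpha=0.001):
    """Flag voxels whose (magnitude, phase) pair is far from the background distribution."""
    feats = np.stack([magnitude.ravel(), phase.ravel()], axis=1)
    bg = feats[background_mask.ravel()]
    mu = bg.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(bg, rowvar=False))
    diff = feats - mu
    d2 = np.einsum('ij,jk,ik->i', diff, cov_inv, diff)      # squared Mahalanobis distance
    threshold = chi2.ppf(1 - alpha, df=2)                   # one threshold over both channels
    return (d2 > threshold).reshape(magnitude.shape)

rng = np.random.default_rng(4)
mag, ph = rng.normal(1, 0.1, (32, 32)), rng.normal(0, 0.05, (32, 32))
mag[10:14, 10:14] += 1.0                                    # a small "active" patch
flags = mahalanobis_flags(mag, ph, background_mask=np.ones_like(mag, bool))
print(flags.sum())
```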
This paper provides exact analytical expressions for the first and second moments of the true error for linear discriminant analysis (LDA) when the data are univariate and taken from two stochastic Gaussian processes. The key point is that we assume a general setting in which the sample data from each class do not need to be identically distributed or independent within or between classes. We compare the true errors of designed classifiers under the typical i.i.d. model and when the data are correlated, providing exact expressions and demonstrating that, depending on the covariance structure, correlated data can result in classifiers with either greater error or less error than when training with uncorrelated data. The general theory is applied to autoregressive and moving-average models of the first order, and it is demonstrated using real genomic data.
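A small Monte Carlo sketch of the kind of comparison the analysis makes exact: the true error of univariate LDA trained on i.i.d. samples versus samples drawn from a first-order autoregressive process. The AR coefficient, sample sizes, and class means are illustrative assumptions; the paper derives these quantities analytically rather than by simulation:

```python
import numpy as np
from scipy.stats import norm

def ar1(n, rho, rng):
    """Zero-mean, unit-variance AR(1) noise."""
    x = np.zeros(n)
    x[0] = rng.normal()
    for t in range(1, n):
        x[t] = rho * x[t - 1] + np.sqrt(1 - rho ** 2) * rng.normal()
    return x

def true_error(mu0, mu1, rng, rho=0.0, n=20, sigma=1.0):
    """Train univariate LDA (midpoint of sample means); return its exact error under the true model."""
    x0 = mu0 + sigma * (ar1(n, rho, rng) if rho else rng.normal(size=n))
    x1 = mu1 + sigma * (ar1(n, rho, rng) if rho else rng.normal(size=n))
    threshold = (x0.mean() + x1.mean()) / 2.0
    # equal priors; the classes truly are N(mu0, sigma^2) and N(mu1, sigma^2)
    return 0.5 * norm.sf(threshold, mu0, sigma) + 0.5 * norm.cdf(threshold, mu1, sigma)

rng = np.random.default_rng(5)
iid = np.mean([true_error(0.0, 1.0, rng, rho=0.0) for _ in range(2000)])
corr = np.mean([true_error(0.0, 1.0, rng, rho=0.8) for _ in range(2000)])
print(f"mean true error, i.i.d. training: {iid:.3f}  AR(1) training: {corr:.3f}")
```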
We describe an annealing procedure that computes the normalized N-cut of a weighted graph G. The first phase transition computes the solution of the approximate normalized 2-cut problem, while the low-temperature solution computes the normalized N-cut. The intermediate solutions provide a sequence of refinements of the 2-cut that can be used to split the data into K clusters with 2 ≤ K ≤ N. This approach only requires specification of the upper limit N on the number of expected clusters, since by controlling the annealing parameter we can obtain any number of clusters K with 2 ≤ K ≤ N. We test the algorithm on an image segmentation problem and apply it to a problem of clustering high-dimensional data from the sensory system of a cricket.
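As a point of reference, the approximate normalized 2-cut that the first phase transition recovers can be sketched with the standard spectral relaxation: threshold the second smallest generalized eigenvector of (D - W)v = λDv. This is the classical relaxation, not the annealing procedure itself:

```python
import numpy as np
from scipy.linalg import eigh

def normalized_two_cut(W):
    """Approximate normalized 2-cut: sign of the second generalized eigenvector of (D - W, D)."""
    d = W.sum(axis=1)
    D = np.diag(d)
    vals, vecs = eigh(D - W, D)          # generalized eigenproblem (D - W) v = lambda D v
    fiedler = vecs[:, 1]                 # eigenvector of the second smallest eigenvalue
    return fiedler > np.median(fiedler)  # split at the median as a simple rounding rule

# toy graph: two dense blocks weakly connected
rng = np.random.default_rng(6)
A = rng.random((10, 10)) * 0.05
A[:5, :5] += 1.0
A[5:, 5:] += 1.0
W = (A + A.T) / 2
np.fill_diagonal(W, 0.0)
print(normalized_two_cut(W).astype(int))
```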
Accumulating evidence suggests that characteristics of pre-treatment FDG-PET could be used as prognostic factors to predict outcomes in different cancer sites. Current risk analyses are limited to visual assessment or direct uptake-value measurements. We investigate intensity-volume histogram metrics and shape and texture features extracted from PET images to predict patients' response to treatment. These approaches were demonstrated on datasets from cervix and head-and-neck cancers, where AUCs of 0.76 and 1.0 were achieved, respectively. The preliminary results suggest that the proposed approaches could provide better tools and discriminant power for utilizing functional imaging in clinical prognosis.
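A minimal sketch of intensity-volume histogram metrics for a PET region of interest, assuming metrics of the form Vx (fraction of the ROI volume above x% of the maximum uptake) and Ix (minimum uptake of the hottest x% of the volume); the exact metric definitions used by the authors are assumptions here:

```python
import numpy as np

def ivh_metrics(uptake_roi):
    """Simple intensity-volume histogram summaries of an array of ROI uptake values."""
    v = np.sort(np.asarray(uptake_roi).ravel())[::-1]     # uptake sorted high to low
    vmax = v[0]
    metrics = {}
    for x in (10, 50, 90):
        metrics[f"V{x}"] = np.mean(v >= x / 100.0 * vmax)                       # volume above x% of max
        metrics[f"I{x}"] = v[max(int(np.ceil(x / 100.0 * v.size)) - 1, 0)]      # min uptake of hottest x%
    return metrics

rng = np.random.default_rng(7)
roi = rng.gamma(shape=2.0, scale=1.5, size=(20, 20, 10))  # placeholder uptake values
print(ivh_metrics(roi))
```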
In this paper we propose a microcalcification classification scheme, assisted by content-based mammogram retrieval, for breast cancer diagnosis. We recently developed a machine learning approach for mammogram retrieval in which the similarity measure between two lesion mammograms was modeled after expert observers. In this work we investigate how to use retrieved similar cases as references to improve the performance of a numerical classifier. Our rationale is that adaptively incorporating local proximity information into a classifier can improve its classification accuracy, thereby leading to an improved "second opinion" for radiologists. Our experimental results on a mammogram database demonstrate that the proposed retrieval-driven approach with an adaptive support vector machine (SVM) could improve the classification performance from 0.78 to 0.82 in terms of the area under the ROC curve.
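A minimal sketch of a retrieval-driven classifier, assuming the "adaptive" step is approximated by retraining an SVM on the k cases most similar to the query; plain Euclidean distance stands in here for the learned, observer-modeled similarity measure:

```python
import numpy as np
from sklearn.svm import SVC

def retrieval_driven_predict(X_train, y_train, x_query, k=25):
    """Retrieve the k most similar training cases and classify the query with an SVM fit on them."""
    dist = np.linalg.norm(X_train - x_query, axis=1)
    idx = np.argsort(dist)[:k]                       # retrieved similar cases
    if len(np.unique(y_train[idx])) == 1:            # all retrieved cases agree on the label
        return y_train[idx][0]
    clf = SVC(kernel="rbf", C=1.0, gamma="scale")
    clf.fit(X_train[idx], y_train[idx])
    return clf.predict(x_query[None, :])[0]

rng = np.random.default_rng(8)
X = rng.normal(size=(200, 6))                        # placeholder lesion features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)
print(retrieval_driven_predict(X, y, X[0]))
```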
Many attempts have been made to characterize latent structures in "texture spaces" defined by attentive similarity judgments. While an optimal description of perceptual texture space remains elusive, we suggest that the similarity judgments gained from these procedures provide a useful standard for relating image statistics to high-level similarity. In the present experiment, we ask subjects to group natural textures into visually similar clusters. We also represent each image using the features employed by three different parametric texture synthesis models. Given the cluster labels for our textures, we use linear discriminant analysis to predict cluster membership. We compare each model's assignments to human data for both positive and contrast-negated textures, and evaluate relative model performance.
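A minimal sketch of the analysis step, assuming each texture is represented by a feature vector from some texture-synthesis model and human cluster labels are available; leave-one-out accuracy with scikit-learn's LDA stands in for the comparison actually reported, and the data below are placeholders:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.default_rng(9)
n_textures, n_features, n_clusters = 60, 10, 4
features = rng.normal(size=(n_textures, n_features))      # one row per texture (model statistics)
labels = rng.integers(0, n_clusters, size=n_textures)     # human cluster assignments (placeholder)

lda = LinearDiscriminantAnalysis()
acc = cross_val_score(lda, features, labels, cv=LeaveOneOut()).mean()
print(f"leave-one-out agreement with human clusters: {acc:.2f}")
```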
With the development of micron-scale imaging techniques, capillaries can be conveniently visualized using methods such as two-photon and whole-mount microscopy. However, the presence of background staining, leaky vessels, and the diffusion of small fluorescent molecules can lead to significant complexity in image analysis and to the loss of information needed to accurately quantify vascular metrics. One solution to this problem is the development of accurate thresholding algorithms that reliably distinguish blood vessels from surrounding tissue. Although various thresholding algorithms have been proposed, our results suggest that, without appropriate pre- or post-processing, existing approaches may fail to produce satisfactory results for capillary images that include areas of contamination. In this study, we propose a novel local thresholding algorithm, called directional histogram ratio at random probes (DHR-RP). This method explicitly considers the geometric features of tube-like objects when binarizing the image and reliably distinguishes small vessels from either clean or contaminated background. Experimental and simulation studies suggest that our DHR-RP algorithm is superior to existing thresholding methods.
Many problems in paleontology reduce to finding the features that best discriminate among a set of classes. A clear example is the classification of new specimens. However, these classifications are generally challenging because the number of discriminant features and the number of samples are limited. This has been the fate of LB1, a new specimen found in the Liang Bua Cave of Flores. Several authors have attributed LB1 to a new species of Homo, H. floresiensis. According to this hypothesis, LB1 is either a member of the early Homo group or a descendant of an ancestor of the Asian H. erectus. Detractors have put forward an alternative hypothesis, which stipulates that LB1 is in fact a microcephalic modern human. In this paper, we show how a new Bayes-optimal discriminant feature extraction technique can be employed to help resolve this type of issue. In this process, we present three types of experiments. First, we use this Bayes-optimal discriminant technique to develop a model of morphological (shape) evolution from Australopiths to H. sapiens. LB1 fits perfectly into this model as a member of the early Homo group. Second, we build a classifier based on the available cranial and mandibular data, appropriately normalized for size and volume. Again, LB1 is most similar to early Homo. Third, we build a brain endocast classifier to show that LB1 is not within the normal range of variation of H. sapiens. Combined, these results support the hypothesis of a very early shared ancestor for LB1 and H. erectus, and they illustrate how discriminant analysis approaches can be successfully used to help classify newly discovered specimens.
Identifying and validating novel phenotypes from images arriving online is a major challenge in high-content RNA interference (RNAi) screening. Newly discovered phenotypes should be visually distinct from existing ones and make biological sense. An online phenotype discovery method featuring adaptive phenotype modeling and iterative cluster merging using improved gap statistics is proposed. Clustering results based on compactness criteria and Gaussian mixture models (GMMs) for existing phenotypes iteratively modify each other through multiple hypothesis testing and model optimization based on minimum classification error (MCE). The method performs well at adaptively discovering new phenotypes when applied to both synthetic datasets and RNAi high-content screening (HCS) images with ground-truth labels.
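A minimal sketch of the standard (not the authors' improved) gap statistic used to decide whether adding a cluster is justified; k-means stands in for the GMM components, and a uniform reference distribution is assumed:

```python
import numpy as np
from sklearn.cluster import KMeans

def gap_statistic(X, k, n_ref=10, rng=None):
    """Gap(k) = E[log W_k(reference)] - log W_k(data), with W_k the within-cluster dispersion."""
    rng = rng or np.random.default_rng(0)

    def log_wk(data):
        km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(data)
        return np.log(km.inertia_)

    lo, hi = X.min(axis=0), X.max(axis=0)
    ref = [log_wk(rng.uniform(lo, hi, size=X.shape)) for _ in range(n_ref)]
    return np.mean(ref) - log_wk(X)

rng = np.random.default_rng(10)
X = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(3, 0.3, (50, 2))])   # two well-separated groups
print({k: round(gap_statistic(X, k, rng=rng), 2) for k in (1, 2, 3)})        # gap peaks at k = 2
```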
This paper proposes a new approach, based on missing-value pattern discovery, for classifying incomplete data. The approach is particularly designed for the classification of datasets with a small number of samples and a high percentage of missing values, where available missing-value treatment approaches do not usually work well. Based on the pattern of the missing values, the proposed approach finds subsets of samples for which most of the features are available and trains a classifier for each subset. It then combines the outputs of the classifiers. Subset selection is translated into a clustering problem, allowing a mathematical framework to be derived for it. A trade-off is established between the computational complexity (number of subsets) and the accuracy of the overall classifier. To deal with this trade-off, a numerical criterion is proposed for predicting the overall performance. The proposed method is applied to seven datasets from the popular University of California, Irvine data-mining archive and to an epilepsy dataset from Henry Ford Hospital, Detroit, Michigan (eight datasets in total). Experimental results show that the classification accuracy of the proposed method is superior to that of the widely used multiple-imputation method and four other methods. They also show that the level of superiority depends on the pattern and percentage of missing values.
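A minimal sketch of the overall idea: group samples by their missing-value pattern, train one classifier per pattern on the features that pattern observes, and combine predictions by majority vote. The grouping rule, the base classifier, and the voting combiner here are simplifying assumptions, not the paper's clustering-based subset selection:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def train_pattern_classifiers(X, y):
    """One classifier per missing-value pattern, trained on samples complete on that pattern's features."""
    classifiers = {}
    for pattern in {tuple(row) for row in ~np.isnan(X)}:
        cols = np.array(pattern)                                # True where the feature is observed
        if not cols.any():
            continue
        rows = (~np.isnan(X[:, cols])).all(axis=1)              # samples complete on these features
        if rows.sum() >= 10 and len(np.unique(y[rows])) > 1:
            classifiers[pattern] = LogisticRegression().fit(X[np.ix_(rows, cols)], y[rows])
    return classifiers

def predict_vote(classifiers, x):
    """Majority vote over the classifiers whose feature subset is fully observed for this sample."""
    votes = [clf.predict(x[np.array(cols)][None, :])[0]
             for cols, clf in classifiers.items()
             if not np.isnan(x[np.array(cols)]).any()]
    return np.bincount(votes).argmax() if votes else None

rng = np.random.default_rng(11)
X = rng.normal(size=(120, 5))
y = (X[:, 0] > 0).astype(int)
X[rng.random(X.shape) < 0.2] = np.nan                           # inject 20% missing values
models = train_pattern_classifiers(X, y)
print(len(models), predict_vote(models, X[0]))
```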
This paper proposes a new nonlinear classifier based on a generalized Choquet integral with signed fuzzy measures, which enhances classification accuracy and power by capturing all possible interactions among two or more attributes. This generalized approach was developed to address unsolved issues in Choquet-integral classification, such as allowing flexible placement of projection lines in n-dimensional space, automatically searching for the lowest misclassification rate based on the Choquet distance, and penalizing misclassified points. A special genetic algorithm is designed to implement this classification optimization with fast convergence. Both numerical experiments and empirical case studies show that this generalized approach improves and extends the functionality of Choquet nonlinear classification in real-world multi-class, multi-dimensional situations.
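For reference, a minimal sketch of the standard discrete Choquet integral of a feature vector with respect to a (possibly signed) set function μ; the paper's generalized form and the genetic-algorithm learning of the measure are not shown:

```python
def choquet_integral(x, mu):
    """Discrete Choquet integral of x = (x_1, ..., x_n) w.r.t. a set function mu over frozensets.

    Standard form: sum_i (x_(i) - x_(i-1)) * mu(A_i), with x_(0) = 0 and A_i the set of
    indices whose value is >= x_(i), where x_(1) <= ... <= x_(n) is the sorted input.
    """
    order = sorted(range(len(x)), key=lambda i: x[i])
    total, prev = 0.0, 0.0
    for k, i in enumerate(order):
        level_set = frozenset(order[k:])              # indices with value >= current x_(i)
        total += (x[i] - prev) * mu[level_set]
        prev = x[i]
    return total

# toy signed measure on two attributes (mu of the empty set is implicitly 0)
mu = {
    frozenset({0}): 0.7,
    frozenset({1}): -0.2,        # signed: negative weights are allowed
    frozenset({0, 1}): 1.0,
}
print(choquet_integral([0.4, 0.9], mu))   # 0.4 * 1.0 + 0.5 * (-0.2) = 0.3
```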
An ensemble of clustering solutions, or partitions, may be generated for a number of reasons. If the data set is very large, clustering may be done on disjoint subsets of tractable size. The data may be distributed across different sites, for which a distributed clustering solution with a final merging of partitions is a natural fit. In this paper, two new approaches to combining partitions represented by sets of cluster centers are introduced. The advantage of these approaches is that they provide a final partition of the data comparable to those of the best existing approaches, yet they scale to extremely large data sets; they can be 100,000 times faster while using much less memory. The new algorithms are compared against the best existing cluster-ensemble merging approaches, against clustering all the data at once, and against a clustering algorithm designed for very large data sets. The comparison is done for fuzzy and hard k-means-based clustering algorithms. It is shown that the centroid-based ensemble merging algorithms presented here generate partitions of quality comparable to the best label-vector approach or to clustering all the data at once, while providing very large speedups.
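A minimal sketch of centroid-based merging: cluster each data chunk independently, then run size-weighted k-means on the pooled centroids to obtain the final centers. This is a generic sketch of the idea, not necessarily either of the two algorithms introduced in the paper:

```python
import numpy as np
from sklearn.cluster import KMeans

def merge_partitions(chunks, k):
    """Cluster each chunk, then cluster the pooled centroids (weighted by cluster size)."""
    centroids, weights = [], []
    for chunk in chunks:
        km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(chunk)
        centroids.append(km.cluster_centers_)
        weights.append(np.bincount(km.labels_, minlength=k))
    centroids = np.vstack(centroids)
    weights = np.concatenate(weights)
    final = KMeans(n_clusters=k, n_init=10, random_state=0)
    final.fit(centroids, sample_weight=weights)       # only the centroids are re-clustered
    return final.cluster_centers_

rng = np.random.default_rng(12)
data = np.vstack([rng.normal(c, 0.2, (500, 2)) for c in ((0, 0), (3, 0), (0, 3))])
rng.shuffle(data)
chunks = np.array_split(data, 5)                      # pretend the data are too large to cluster at once
print(merge_partitions(chunks, k=3).round(2))
```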
CT Colonography (CTC) is an emerging minimally invasive technique for screening and diagnosing colon cancers. Computer Aided Detection (CAD) techniques can increase sensitivity and reduce false positives. Inspired by the way radiologists detect polyps via 3D virtual fly-throughs in CTC, we borrow from geographic information systems the idea of using a topographic height map for colonic polyp measurement and false-positive reduction. After curvature-based filtering and a 3D CT feature classifier, a height map is computed for each detection using a ray-casting algorithm. We design a concentric index to characterize the concentric pattern in the polyp height map, based on the fact that polyps are protrusions from the colon wall and are round in shape. The height map is optimized through a multi-scale spiral spherical search to maximize the concentric index. We derive several topographic features from the map and compute texture features based on wavelet decomposition. The features are then sent to a committee of support vector machines for classification. We trained our method on 394 patients (71 polyps) and tested it on 792 patients (226 polyps). Results show that we can achieve 95% sensitivity at 2.4 false positives per patient and that the height-map features reduce false positives by more than 50%. We also computed polyp height and width measurements and correlated them with manual measurements. The Pearson correlations are 0.74 (p=0.11) and 0.75 (p=0.17) for height and width, respectively.
We propose an approach to detecting highly deformable shapes in images via manifold learning with regression. Our method does not require shape key points to be defined at high-contrast image regions, nor do we need an initial estimate of the shape. We only require sufficient representative training data and a rough initial estimate of the object position and scale. We demonstrate the method for face shape learning and provide a comparison to a nonlinear Active Appearance Model. Our method is highly accurate, to nearly pixel precision, and is capable of accurately detecting the shapes of faces undergoing extreme expression changes. The technique is robust to occlusions such as glasses and gives reasonable results for extremely degraded image resolutions.
Deformable shape detection is an important problem in computer vision and pattern recognition. However, standard detectors are typically limited to locating only a few salient landmarks, such as those near edges or areas of high contrast, often conveying insufficient shape information. This paper presents a novel statistical pattern recognition approach to locate a dense set of salient and non-salient landmarks in images of a deformable object. We exploit the fact that several object classes exhibit a homogeneous structure, such that each landmark position provides some information about the positions of the other landmarks. In our model, the relationship between all pairs of landmarks is naturally encoded as a probabilistic graph. Dense landmark detections are then obtained with a new sampling algorithm that, given a set of candidate detections, selects the most likely positions so as to maximize the probability of the graph. Our experimental results demonstrate accurate, dense landmark detections within and across different databases.
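A minimal sketch of selecting one candidate per landmark so that pairwise relations encoded in a graph are respected; Gaussian pairwise-offset potentials and a greedy coordinate-ascent selection are illustrative assumptions standing in for the paper's sampling algorithm:

```python
import numpy as np

def select_candidates(candidates, mean_offsets, sigma=5.0, n_sweeps=5):
    """Pick one candidate per landmark to maximize the sum of pairwise log-potentials.

    candidates : list of (m_i, 2) arrays of candidate positions for each landmark i
    mean_offsets[i][j] : expected offset from landmark i to landmark j (learned from training shapes)
    """
    n = len(candidates)
    choice = [0] * n                                            # start from the first candidate of each

    def pair_score(i, pi, j, pj):
        diff = (candidates[j][pj] - candidates[i][pi]) - mean_offsets[i][j]
        return -np.dot(diff, diff) / (2 * sigma ** 2)           # Gaussian log-potential (up to a constant)

    for _ in range(n_sweeps):                                   # greedy coordinate ascent over landmarks
        for i in range(n):
            scores = [sum(pair_score(i, p, j, choice[j]) for j in range(n) if j != i)
                      for p in range(len(candidates[i]))]
            choice[i] = int(np.argmax(scores))
    return [candidates[i][choice[i]] for i in range(n)]

# toy example: 3 landmarks with known pairwise offsets and a few noisy candidates each
true = np.array([[0.0, 0.0], [10.0, 0.0], [5.0, 8.0]])
offsets = {i: {j: true[j] - true[i] for j in range(3)} for i in range(3)}
rng = np.random.default_rng(13)
cands = [true[i] + rng.normal(0, 3, (4, 2)) for i in range(3)]
print(np.round(select_candidates(cands, offsets), 1))
```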
This paper presents an approach to recognizing two-dimensional multiscale objects on a reconfigurable mesh architecture with horizontal and vertical broadcasting. The object models are described in terms of a convex/concave multiscale boundary decomposition that is represented by a tree structure. The problem of matching an observed object against a model is formulated as a tree-matching problem. A parallel dynamic programming solution to this problem is presented that requires O(max(n,m)) time on an n×m reconfigurable mesh, where n and m are the sizes of the two trees.
A new method is presented for adaptive document image binarization, where the page is considered a collection of subcomponents such as text, background, and pictures. The problems caused by noise, illumination, and many source-type-related degradations are addressed. The algorithm uses document characteristics to determine (surface) attributes that are often used in document segmentation. Based on this characteristic analysis, two new algorithms are applied to determine a local threshold for each pixel. An algorithm based on soft decision control is used for thresholding the background and picture regions, while an approach utilizing the local mean and variance of gray values is applied to textual regions. Tests were performed with images containing different types of document components and degradations. The results show that the method adapts and performs well in each case.
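A minimal sketch of the local mean/variance thresholding idea applied to textual regions, using a Niblack-style rule T = m + k·s over a sliding window; the window size, the constant k, and the rule itself are generic assumptions rather than the paper's exact formulas:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_mean_variance_threshold(gray, window=25, k=-0.2):
    """Binarize with a per-pixel threshold T = local_mean + k * local_std (Niblack-style)."""
    gray = gray.astype(float)
    mean = uniform_filter(gray, size=window)
    mean_sq = uniform_filter(gray ** 2, size=window)
    std = np.sqrt(np.maximum(mean_sq - mean ** 2, 0.0))
    threshold = mean + k * std
    return gray > threshold              # True = background, False = ink (dark text)

rng = np.random.default_rng(14)
page = np.full((100, 100), 200.0) + rng.normal(0, 5, (100, 100))
page[40:60, 20:80] -= 120.0              # a dark "text" stripe
binary = local_mean_variance_threshold(page)
print(binary.mean())                     # fraction of pixels classified as background
```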
This paper describes a new approach to adaptive digital halftoning with the least-squares model-based (LSMB) method. A framework is presented for the adaptive control of smoothness and sharpness of the halftone patterns according to local image characteristics. The proposed method employs explicit, quantitative models of the human visual system represented as 2D linear filters (eye filters). In contrast to the standard LSMB method, where a single eye filter is employed uniformly over the image, the model parameters are controlled according to the local image characteristics at each pixel. Because of this adaptive selection of eye filters, image enhancement is incorporated into the halftoning process. The effectiveness of the proposed approach is demonstrated through experiments on real data, in comparison with the error-diffusion algorithm and the standard LSMB method.
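A minimal sketch of the least-squares model-based idea with a single eye filter: choose the binary halftone that minimizes the squared error between the eye-filtered halftone and the eye-filtered continuous-tone image, here by greedy pixel toggling. The Gaussian eye filter and the toggling scheme are illustrative assumptions; the adaptive, per-pixel filter selection is not shown:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def lsmb_halftone(gray, sigma=1.0, n_sweeps=5):
    """Greedy LSMB halftoning: flip pixels whenever the flip lowers the perceived (filtered) error."""
    target = gaussian_filter(gray.astype(float), sigma)      # eye-filtered continuous-tone image
    halftone = (gray > 0.5).astype(float)                    # start from simple thresholding
    for _ in range(n_sweeps):
        changed = False
        for idx in np.ndindex(gray.shape):
            current = np.sum((gaussian_filter(halftone, sigma) - target) ** 2)
            halftone[idx] = 1.0 - halftone[idx]              # try flipping this pixel
            flipped = np.sum((gaussian_filter(halftone, sigma) - target) ** 2)
            if flipped >= current:
                halftone[idx] = 1.0 - halftone[idx]          # flip did not help; undo it
            else:
                changed = True
        if not changed:
            break
    return halftone

gray = np.tile(np.linspace(0, 1, 16), (16, 1))               # small gray ramp (values in [0, 1])
ht = lsmb_halftone(gray)
print(ht.mean(), gray.mean())                                 # average tone should roughly match
```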