Topology-Based Kernels With Application to Inference Problems in Alzheimer's Disease

Alzheimer’s Disease Neuroimaging Initiative and Department of Computer Sciences, University of Wisconsin-Madison, Madison, WI 53706, USA.
IEEE Transactions on Medical Imaging 04/2011; 30(10):1760-70. DOI: 10.1109/TMI.2011.2147327
Source: PubMed


Alzheimer's disease (AD) research has recently witnessed a great deal of activity focused on developing new statistical learning tools for automated inference using imaging data. The workhorse for many of these techniques is the support vector machine (SVM) framework (or, more generally, kernel-based methods). Most of these require, as a first step, specification of a kernel matrix K between input examples (i.e., images). The inner product between images I(i) and I(j) in a feature space can generally be written in closed form, so it is convenient to treat K as "given." In certain neuroimaging applications, however, this assumption becomes problematic. For example, it is rather challenging to provide a scalar measure of similarity between two instances of highly attributed data such as cortical thickness measures on cortical surfaces. Cortical thickness is known to be discriminative for neurological disorders, so leveraging such information in an inference framework, especially within a multi-modal method, is potentially advantageous. Yet despite its clinical relevance, relatively few works have successfully exploited this measure for classification or regression. Motivated by these applications, our paper presents novel techniques to compute similarity matrices for such topologically based attributed data. Our ideas leverage recent developments that characterize signals (e.g., cortical thickness) by the persistence of their topological features, leading to a scheme for simple constructions of kernel matrices. As a proof of principle, on a dataset of 356 subjects from the Alzheimer's Disease Neuroimaging Initiative study, we report good performance on several statistical inference tasks without any feature selection, dimensionality reduction, or parameter tuning.
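One of the citing comments further down summarizes the construction concretely: each persistence diagram is rasterized on a regular grid, a kernel-density estimate is computed, and the vectorized density serves as the representation from which a kernel matrix is built. A minimal sketch of that pipeline is given here, assuming a Gaussian density with bandwidth 0.05, a 64 x 64 grid over [0, 1]^2, and a plain linear kernel between the vectorized densities; these choices are illustrative and not the settings reported in the paper.

```python
import numpy as np

def diagram_to_density(diagram, grid_x, grid_y, bandwidth=0.05):
    """Rasterize a persistence diagram (iterable of (birth, death) pairs) into a
    Gaussian kernel-density estimate evaluated on a fixed grid, then normalize."""
    xx, yy = np.meshgrid(grid_x, grid_y)
    density = np.zeros_like(xx)
    for b, d in diagram:
        density += np.exp(-((xx - b) ** 2 + (yy - d) ** 2) / (2.0 * bandwidth ** 2))
    total = density.sum()
    return (density / total).ravel() if total > 0 else density.ravel()

def kernel_matrix(diagrams, grid_x, grid_y, bandwidth=0.05):
    """Linear kernel K[i, j] = <f_i, f_j> between the vectorized density estimates."""
    feats = np.stack([diagram_to_density(D, grid_x, grid_y, bandwidth) for D in diagrams])
    return feats @ feats.T

# Toy usage: two small diagrams, densities on a 64 x 64 grid over [0, 1]^2.
grid = np.linspace(0.0, 1.0, 64)
diagrams = [np.array([[0.10, 0.40], [0.20, 0.90]]), np.array([[0.15, 0.50]])]
K = kernel_matrix(diagrams, grid, grid)
print(K.shape)  # (2, 2)
```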

  • Source
    • "These feature vectors are then understood as existing in a probability space. Like the approach discussed in [17], the authors of [18] focus on 0-dimensional homology in the context of sublevel set persistence, but in this case, the PD is transformed into a vector representation by computing the kernel density estimate on a uniform grid. Our approach, developed concurrently and independently of those mentioned above, provides a representation of the information in a PD that allows for flexibility in its definition. "
    ABSTRACT: Many datasets can be viewed as a noisy sampling of an underlying topological space. Topological data analysis aims to understand and exploit this underlying structure for the purpose of knowledge discovery. A fundamental tool of the discipline is persistent homology, which captures underlying data-driven, scale-dependent homological information. A representation in a "persistence diagram" concisely summarizes this information. By giving the space of persistence diagrams a metric structure, a class of effective machine learning techniques can be applied. We modify the persistence diagram to a "persistence image" in a manner that allows the use of a wider set of distance measures and extends the list of machine learning tools that can be utilized. It is shown that several machine learning techniques, applied to persistence images for classification tasks, yield high accuracy rates on multiple data sets. Furthermore, these same machine learning techniques fare better when applied to persistence images than when applied to persistence diagrams. We discuss the sensitivity of the classification accuracy to the parameters of the approach. An application of persistence-image-based classification to a data set arising from applied dynamical systems is presented as a further illustration.
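A minimal sketch of the persistence-image construction described above is given below, assuming a 20 x 20 grid over [0, 1]^2, a Gaussian spread of sigma = 0.1, and linear persistence weighting; these parameters are illustrative, and evaluating the Gaussians at pixel centers (rather than integrating them over pixels) is a further simplification.

```python
import numpy as np

def persistence_image(diagram, resolution=20, sigma=0.1, max_val=1.0):
    """Map each (birth, death) point to (birth, persistence), then sum
    persistence-weighted Gaussians evaluated on a regular grid."""
    pts = np.atleast_2d(np.asarray(diagram, dtype=float))
    birth, pers = pts[:, 0], pts[:, 1] - pts[:, 0]      # birth-persistence coordinates
    axis = np.linspace(0.0, max_val, resolution)
    xx, yy = np.meshgrid(axis, axis)
    img = np.zeros_like(xx)
    for b, p in zip(birth, pers):
        weight = p / max_val                            # linear weighting: one common choice
        img += weight * np.exp(-((xx - b) ** 2 + (yy - p) ** 2) / (2.0 * sigma ** 2))
    return img.ravel()                                  # feature vector for any standard classifier

vec = persistence_image([[0.10, 0.60], [0.30, 0.35]])
print(vec.shape)  # (400,)
```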
  • Source
    • "With the same advances in modern computing technology that allow for the storage of large datasets, persistent homology and its variants can be implemented. Features derived from persistent homology have recently been found useful for classification of hepatic lesions (Adcock et al., 2014) and persistent homology has been applied for the analysis of structural brain images (Chung et al., 2009; Pachauri et al., 2011). Outside the arena of medical applications, Sethares and Budney (2013) use persistent homology to study topological structures in musical data. "
    ABSTRACT: Brain decoding involves the determination of a subject's cognitive state or an associated stimulus from functional neuroimaging data measuring brain activity. In this setting the cognitive state is typically characterized by an element of a finite set, and the neuroimaging data comprise voluminous spatiotemporal measurements of some aspect of the neural signal. The associated statistical problem is one of classification from high-dimensional data. We explore the use of functional principal component analysis, mutual information networks, and persistent homology for examining the data through exploratory analysis and for constructing features characterizing the neural signal for brain decoding. We review each approach from this perspective, and we incorporate the features into a classifier based on symmetric multinomial logistic regression with elastic net regularization. The approaches are illustrated in an application where the task is to infer, from brain activity measured with magnetoencephalography (MEG), the type of video stimulus shown to a subject.
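The classifier mentioned at the end of this abstract, multinomial logistic regression with elastic-net regularization, can be sketched with scikit-learn; the feature matrix below is a random placeholder standing in for the FPCA, network, and persistence-derived features, and the regularization settings (l1_ratio, C) are assumptions rather than the authors' reported choices.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical feature matrix: rows are trials, columns are features built from
# FPCA scores, network summaries, and persistence-based descriptors; y holds the
# stimulus labels. Random data stands in for real MEG-derived features here.
rng = np.random.default_rng(0)
X = rng.normal(size=(120, 50))
y = rng.integers(0, 3, size=120)

clf = make_pipeline(
    StandardScaler(),
    LogisticRegression(penalty="elasticnet", solver="saga",
                       l1_ratio=0.5, C=1.0, max_iter=5000),
)
clf.fit(X, y)
print(clf.score(X, y))  # training accuracy on the toy data
```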
  • Source
    • "Yet, the step of training the classifier with topological information is typically done in a rather adhoc manner. In [23] for instance, the persistence diagram is first rasterized on a regular grid, then a kernel-density estimate is computed, and eventually the vectorized discrete probability density function is used as a feature vector to train a SVM using standard kernels for R n . It is however unclear how the resulting kernel-induced distance behaves with respect to existing metrics (e.g., bottleneck or Wasserstein distance) and how properties such as stability are affected. "
    ABSTRACT: Topological data analysis offers a rich source of valuable information to study vision problems. Yet, so far we lack a theoretically sound connection to popular kernel-based learning techniques, such as kernel SVMs or kernel PCA. In this work, we establish such a connection by designing a multi-scale kernel for persistence diagrams, a stable summary representation of topological features in data. We show that this kernel is positive definite and prove its stability with respect to the 1-Wasserstein distance. Experiments on two benchmark datasets for 3D shape classification/retrieval and texture recognition show considerable performance gains of the proposed method compared to an alternative approach that is based on the recently introduced persistence landscapes.
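A sketch of a multi-scale kernel on persistence diagrams in the spirit of this abstract is given below: Gaussians centered at pairs of diagram points, minus the contribution of diagonally mirrored points so that the diagonal carries no mass. The specific normalization and the scale parameter sigma follow one common formulation and should be read as assumptions, not as a verbatim reproduction of the paper's kernel.

```python
import numpy as np

def scale_space_kernel(F, G, sigma=1.0):
    """Kernel value between two persistence diagrams F and G, given as arrays of
    (birth, death) points. Each point of G is also paired with its mirror image
    across the diagonal, with a negative sign, so that points on the diagonal
    contribute nothing; this is what keeps the construction stable."""
    F = np.atleast_2d(np.asarray(F, dtype=float))
    G = np.atleast_2d(np.asarray(G, dtype=float))
    G_mirror = G[:, ::-1]                         # reflect (birth, death) -> (death, birth)
    sq = ((F[:, None, :] - G[None, :, :]) ** 2).sum(axis=-1)
    sq_mirror = ((F[:, None, :] - G_mirror[None, :, :]) ** 2).sum(axis=-1)
    return (np.exp(-sq / (8.0 * sigma)) - np.exp(-sq_mirror / (8.0 * sigma))).sum() / (8.0 * np.pi * sigma)

k = scale_space_kernel([[0.1, 0.5], [0.2, 0.8]], [[0.15, 0.55]], sigma=0.5)
print(k)
```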