Conference Paper

Rates of convergence for the cluster tree.

Conference: Advances in Neural Information Processing Systems 23: 24th Annual Conference on Neural Information Processing Systems 2010. Proceedings of a meeting held 6-9 December 2010, Vancouver, British Columbia, Canada.
Source: DBLP

ABSTRACT: Mode clustering is a nonparametric method for clustering that defines clusters using the basins of attraction of a density estimator's modes. We provide several enhancements to mode clustering: (i) a soft variant of cluster assignment, (ii) a measure of connectivity between clusters, (iii) a technique for choosing the bandwidth, (iv) a method for denoising small clusters, and (v) an approach to visualizing the clusters. Combining all these enhancements gives us a useful procedure for clustering in multivariate problems.
06/2014;
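The basin-of-attraction idea can be illustrated with an off-the-shelf mean-shift clusterer. The sketch below is a minimal hard-assignment version assuming scikit-learn's MeanShift and its bandwidth heuristic; it does not include the soft assignment, connectivity, or denoising enhancements described in the abstract.

```python
# Minimal sketch of hard mode clustering: each point is assigned to the mode
# whose basin of attraction it falls into, here via scikit-learn's MeanShift.
# The bandwidth rule below is an illustrative assumption, not the paper's choice.
import numpy as np
from sklearn.cluster import MeanShift, estimate_bandwidth

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (100, 2)),       # two well-separated blobs,
               rng.normal(6, 1, (100, 2))])      # so two modes are expected

bandwidth = estimate_bandwidth(X, quantile=0.2)  # data-driven bandwidth heuristic
ms = MeanShift(bandwidth=bandwidth).fit(X)

print("modes found:", len(ms.cluster_centers_))  # one cluster per estimated mode
print("cluster sizes:", np.bincount(ms.labels_)) # hard (not soft) assignments
```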

ABSTRACT: Based on the work of Hartigan, the clusters of a distribution are often defined to be the connected components of a density level set. Unfortunately, this definition depends on the user-specified level, and in general finding a reasonable level is a difficult task. In addition, the definition is not rigorous for discontinuous densities, since the topological structure of a density level set may be changed by modifying the density on a set of measure zero. In this work, we address these issues by first modifying the notion of density level sets in a way that makes the level sets independent of the actual choice of the density. We then propose a simple algorithm for estimating the smallest level at which the modified level sets have more than one connected component. For this algorithm we provide a finite sample analysis, which is then used to show that the algorithm consistently estimates both the smallest level and the corresponding connected components. We further establish rates of convergence for the two estimation problems, and last but not least, we present a simple strategy for determining the width parameter of the involved density estimator in a data-dependent way. The resulting algorithm turns out to be adaptive, that is, it achieves the optimal rates achievable by our analysis without knowing characteristics of the underlying distribution.
09/2014;
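A rough sketch of the estimation problem described above: scan candidate levels of a kernel density estimate and report the smallest one at which the estimated level set splits into more than one connected component. The KDE bandwidth and the connection radius below are illustrative assumptions, not the adaptive, data-dependent choices the paper develops.

```python
# Sketch: find the smallest level lambda at which {x : f_hat(x) >= lambda},
# approximated by the sample points above the level, has more than one
# connected component of a fixed-radius neighborhood graph.
# Bandwidth and radius are illustrative assumptions.
import numpy as np
from scipy.sparse.csgraph import connected_components
from sklearn.neighbors import KernelDensity, radius_neighbors_graph

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-3, 1, (150, 2)), rng.normal(3, 1, (150, 2))])

f_hat = np.exp(KernelDensity(bandwidth=0.7).fit(X).score_samples(X))

for lam in np.sort(f_hat):                        # candidate levels taken from the data
    high = X[f_hat >= lam]                        # sample points above the level
    if len(high) < 2:
        break
    graph = radius_neighbors_graph(high, radius=1.0)
    n_comp, _ = connected_components(graph, directed=False)
    if n_comp > 1:                                # level set has split into clusters
        print(f"first split at level ~{lam:.4f} with {n_comp} components")
        break
```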

ABSTRACT: For a density $f$ on ${\mathbb R}^d$, a {\it high-density cluster} is any connected component of $\{x: f(x) \geq \lambda\}$, for some $\lambda > 0$. The set of all high-density clusters forms a hierarchy called the {\it cluster tree} of $f$. We present two procedures for estimating the cluster tree given samples from $f$. The first is a robust variant of the single linkage algorithm for hierarchical clustering. The second is based on the $k$-nearest neighbor graph of the samples. We give finite-sample convergence rates for these algorithms, which also imply consistency, and we derive lower bounds on the sample complexity of cluster tree estimation. Finally, we study a tree pruning procedure that is guaranteed, under milder conditions than usual, to remove spurious clusters while recovering salient ones.
IEEE Transactions on Information Theory 06/2014; 60(12).
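A condensed sketch of the variant based on the $k$-nearest neighbor graph: the distance to the k-th nearest neighbor serves as an inverse-density surrogate, and sweeping a scale r while taking connected components of a neighborhood graph on the surviving points yields a nested hierarchy. The choices of k, the sweep grid, and the single connection radius are illustrative assumptions, not the paper's exact construction or its robust single-linkage variant.

```python
# Sketch of a k-NN-graph cluster tree: r_k(x), the distance to the k-th nearest
# neighbor, acts as an inverse-density surrogate; as the scale r grows, more
# points become "active" and merge, giving a nested hierarchy of components.
# k, the sweep grid, and the connection radius are illustrative assumptions.
import numpy as np
from scipy.sparse.csgraph import connected_components
from sklearn.neighbors import NearestNeighbors, radius_neighbors_graph

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-3, 1, (150, 2)), rng.normal(3, 1, (150, 2))])
k = 10

dists, _ = NearestNeighbors(n_neighbors=k + 1).fit(X).kneighbors(X)  # +1: self is returned
r_k = dists[:, -1]                                  # distance to the k-th other point

tree_levels = []                                    # (scale, active indices, labels)
for r in np.quantile(r_k, np.linspace(0.05, 1.0, 20)):
    active = np.flatnonzero(r_k <= r)               # "high-density" points at scale r
    if len(active) < 2:
        continue
    graph = radius_neighbors_graph(X[active], radius=r)
    n_comp, labels = connected_components(graph, directed=False)
    tree_levels.append((r, active, labels))         # one level of the estimated tree
    print(f"r={r:.3f}: {len(active)} active points, {n_comp} components")
```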
