Conference Paper
Rates of convergence for the cluster tree.
Conference: Advances in Neural Information Processing Systems 23: 24th Annual Conference on Neural Information Processing Systems 2010. Proceedings of a meeting held 6-9 December 2010, Vancouver, British Columbia, Canada.
Source: DBLP

Article: Enhanced Mode Clustering
ABSTRACT: Mode clustering is a nonparametric method for clustering that defines clusters using the basins of attraction of a density estimator's modes. We provide several enhancements to mode clustering: (i) a soft variant of cluster assignment, (ii) a measure of connectivity between clusters, (iii) a technique for choosing the bandwidth, (iv) a method for denoising small clusters, and (v) an approach to visualizing the clusters. Combining all these enhancements gives us a useful procedure for clustering in multivariate problems.
06/2014
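The core idea of mode clustering — assigning each point to the basin of attraction of a KDE mode — is usually realized with the mean-shift iteration. Below is a minimal, self-contained sketch under assumed defaults (Gaussian kernel, a hypothetical `mean_shift_modes` helper with an illustrative bandwidth and mode-merging threshold); it is not the paper's enhanced procedure, only the baseline it builds on.

```python
import numpy as np

def mean_shift_modes(X, bandwidth=0.5, n_iter=200, tol=1e-6):
    """Shift every sample to a mode of a Gaussian KDE; points that reach
    the same mode share a basin of attraction, i.e. a cluster.
    Illustrative sketch: bandwidth and merge threshold are assumptions."""
    pts = X.astype(float).copy()
    for _ in range(n_iter):
        # Gaussian kernel weights between current iterates and the data
        d2 = ((pts[:, None, :] - X[None, :, :]) ** 2).sum(-1)
        w = np.exp(-d2 / (2 * bandwidth ** 2))
        # mean-shift update: kernel-weighted average of the data
        new = (w[:, :, None] * X[None, :, :]).sum(axis=1) / w.sum(axis=1, keepdims=True)
        shift = np.abs(new - pts).max()
        pts = new
        if shift < tol:
            break
    # merge iterates that converged to (numerically) the same mode
    modes, labels = [], np.empty(len(X), dtype=int)
    for i, p in enumerate(pts):
        for j, m in enumerate(modes):
            if np.linalg.norm(p - m) < bandwidth / 2:
                labels[i] = j
                break
        else:
            modes.append(p)
            labels[i] = len(modes) - 1
    return np.array(modes), labels
```

On two well-separated Gaussian blobs this recovers one mode per blob; the paper's enhancements (soft assignment, connectivity, denoising) would sit on top of this hard-assignment baseline.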
ABSTRACT: Based on the work of Hartigan, the clusters of a distribution are often defined to be the connected components of a density level set. Unfortunately, this definition depends on the user-specified level, and in general finding a reasonable level is a difficult task. In addition, the definition is not rigorous for discontinuous densities, since the topological structure of a density level set may be changed by modifying the density on a set of measure zero. In this work, we address these issues by first modifying the notion of density level sets in a way that makes the level sets independent of the actual choice of the density. We then propose a simple algorithm for estimating the smallest level at which the modified level sets have more than one connected component. For this algorithm we provide a finite-sample analysis, which is then used to show that the algorithm consistently estimates both the smallest level and the corresponding connected components. We further establish rates of convergence for the two estimation problems, and, last but not least, we present a simple strategy for determining the width parameter of the involved density estimator in a data-dependent way. The resulting algorithm turns out to be adaptive, that is, it achieves the optimal rates achievable by our analysis without knowing characteristics of the underlying distribution.
09/2014
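The estimation target described above — the smallest level at which the level set splits into more than one connected component — can be illustrated with a toy scan: threshold a KDE at increasing levels and count connected components among the surviving sample points. This is only an illustrative sketch, not the paper's algorithm; the function name, connection radius, and level grid are all assumptions.

```python
import numpy as np

def smallest_split_level(X, bandwidth=0.5, radius=1.0, n_levels=50):
    """Scan density levels from low to high and return the first level at
    which the points with density estimate >= level form more than one
    connected component (points within `radius` count as connected).
    Toy sketch; parameters are illustrative assumptions."""
    n, d = X.shape
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    # Gaussian KDE evaluated at the sample points
    f = np.exp(-d2 / (2 * bandwidth**2)).sum(1) / (n * (2 * np.pi * bandwidth**2) ** (d / 2))
    adj = np.sqrt(d2) <= radius
    for lam in np.linspace(f.min(), f.max(), n_levels):
        keep = np.where(f >= lam)[0]
        if len(keep) == 0:
            break
        # count connected components among kept points via DFS
        seen, comps = set(), 0
        for s in keep:
            if s in seen:
                continue
            comps += 1
            stack = [s]
            while stack:
                u = stack.pop()
                if u in seen:
                    continue
                seen.add(u)
                stack.extend(v for v in keep if adj[u, v] and v not in seen)
        if comps > 1:
            return lam
    return None
```

For data drawn from a clearly bimodal density, the scan returns a finite split level; the paper's contribution is making this estimate rigorous, with rates and a data-dependent choice of the width parameter.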
ABSTRACT: For a density $f$ on ${\mathbb R}^d$, a {\it high-density cluster} is any connected component of $\{x: f(x) \geq \lambda\}$, for some $\lambda > 0$. The set of all high-density clusters forms a hierarchy called the {\it cluster tree} of $f$. We present two procedures for estimating the cluster tree given samples from $f$. The first is a robust variant of the single linkage algorithm for hierarchical clustering. The second is based on the $k$-nearest neighbor graph of the samples. We give finite-sample convergence rates for these algorithms, which also imply consistency, and we derive lower bounds on the sample complexity of cluster tree estimation. Finally, we study a tree pruning procedure that is guaranteed, under milder conditions than usual, to remove spurious clusters while recovering salient ones.
IEEE Transactions on Information Theory 06/2014; 60(12).
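The flavor of the two estimators above — rank points by a $k$-NN density surrogate, then link nearby points as the density level is lowered — can be sketched with a small union-find sweep. This is a loose illustration in the spirit of density-ordered single linkage, not the paper's exact procedure; the function name, the connection rule, and the recorded merge levels are assumptions.

```python
import numpy as np

def knn_single_linkage(X, k=5):
    """Activate points in order of decreasing k-NN density (small k-th
    neighbor distance = high density), connect active points whose distance
    is within the larger of their k-NN radii, and record merge levels.
    Illustrative sketch only; the connection rule is an assumption."""
    n = len(X)
    dist = np.sqrt(((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
    r_k = np.sort(dist, axis=1)[:, k]   # distance to k-th nearest neighbor
    order = np.argsort(r_k)             # densest points first
    parent = list(range(n))

    def find(i):
        # union-find with path compression
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    merges, active = [], []
    for idx in order:
        for j in active:
            if dist[idx, j] <= max(r_k[idx], r_k[j]):
                ri, rj = find(idx), find(j)
                if ri != rj:
                    parent[rj] = ri
                    merges.append(float(r_k[idx]))  # level at which clusters merge
        active.append(idx)
    labels = np.array([find(i) for i in range(n)])
    return merges, labels
```

The sequence of merge levels is a rough proxy for the cluster tree: well-separated high-density regions stay in distinct components until late in the sweep, which is the structure the paper's convergence rates quantify.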