We provide evidence that nonlinear dimensionality reduction, clustering, and data set parameterization can be solved within one and the same framework. The main idea is to define a system of coordinates with an explicit metric that reflects the connectivity of a given data set and that is robust to noise. Our construction, which is based on a Markov random walk on the data, offers a general scheme of simultaneously reorganizing and subsampling graphs and arbitrarily shaped data sets in high dimensions using intrinsic geometry. We show that clustering in embedding spaces is equivalent to compressing operators. The objective of data partitioning and clustering is to coarse-grain the random walk on the data while at the same time preserving a diffusion operator for the intrinsic geometry or connectivity of the data set up to some accuracy. We show that the quantization distortion in diffusion space bounds the error of compression of the operator, thus giving a rigorous justification for k-means clustering in diffusion space and a precise measure of the performance of general clustering algorithms.
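The pipeline sketched in the abstract — form a Markov random walk on the data, embed the points with the walk's leading eigenvectors, then run k-means in the resulting diffusion space — can be illustrated as follows. This is a minimal sketch under assumed choices (a Gaussian kernel with bandwidth `eps`, a farthest-point k-means initialization, and the synthetic two-blob data), not the authors' implementation.

```python
import numpy as np

def diffusion_map(X, eps, n_components=2, t=1):
    """Embed rows of X with the top non-trivial eigenvectors of the
    Markov matrix P = D^{-1} K, scaled by eigenvalues to the power t."""
    # Gaussian kernel on pairwise squared distances (assumed kernel choice)
    D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-D2 / eps)
    d = K.sum(axis=1)
    # symmetric conjugate S = D^{-1/2} K D^{-1/2} shares eigenvalues with P
    S = K / np.sqrt(np.outer(d, d))
    vals, vecs = np.linalg.eigh(S)
    order = np.argsort(vals)[::-1]
    vals, vecs = vals[order], vecs[:, order]
    # right eigenvectors of P recovered as D^{-1/2} v; drop the trivial one
    psi = vecs / np.sqrt(d)[:, None]
    return (vals[1:n_components + 1] ** t) * psi[:, 1:n_components + 1]

def kmeans(Y, k, iters=50):
    """Plain k-means with deterministic farthest-point initialization."""
    C = [Y[0]]
    for _ in range(1, k):
        dist = np.min(((Y[:, None] - np.array(C)[None]) ** 2).sum(-1), axis=1)
        C.append(Y[np.argmax(dist)])
    C = np.array(C)
    for _ in range(iters):
        labels = np.argmin(((Y[:, None] - C[None]) ** 2).sum(-1), axis=1)
        C = np.array([Y[labels == j].mean(axis=0) if np.any(labels == j)
                      else C[j] for j in range(k)])
    return labels

# demo on synthetic data: two well-separated blobs
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.1, (30, 2)), rng.normal(5, 0.1, (30, 2))])
labels = kmeans(diffusion_map(X, eps=1.0), k=2)
```

For two nearly disconnected components the first non-trivial eigenvector is close to piecewise constant, so k-means in diffusion space recovers the components; this is the setting in which clustering in the embedding space approximates compressing the diffusion operator.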
Index Terms: Machine learning, text analysis, knowledge retrieval, quantization, graph-theoretic methods, compression (coding), clustering, clustering similarity measures, information visualization, Markov processes, graph algorithms.
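The coarse-graining step — aggregating the random walk over a partition while keeping a stochastic operator on the blocks — can be sketched as below. The helper `coarse_grain`, the stationary-distribution weighting, and the power-iteration step are illustrative assumptions for this sketch, not the authors' code.

```python
import numpy as np

def coarse_grain(P, labels, k, iters=500):
    """Aggregate a row-stochastic matrix P over the partition given by labels:
    P_tilde[u, v] = (1 / pi(S_u)) * sum_{i in S_u} pi_i * sum_{j in S_v} P[i, j],
    where pi is the stationary distribution of P (assumed ergodic)."""
    pi = np.full(len(P), 1.0 / len(P))
    for _ in range(iters):            # power iteration for the stationary distribution
        pi = pi @ P
        pi /= pi.sum()
    Pt = np.zeros((k, k))
    for u in range(k):
        mu = labels == u
        for v in range(k):
            mv = labels == v
            Pt[u, v] = (pi[mu, None] * P[np.ix_(mu, mv)]).sum() / pi[mu].sum()
    return Pt

# demo: random ergodic chain on 6 states, coarse-grained to 2 blocks
rng = np.random.default_rng(1)
P = rng.random((6, 6))
P /= P.sum(axis=1, keepdims=True)
Pt = coarse_grain(P, np.array([0, 0, 0, 1, 1, 1]), 2)
```

By construction each row of the coarse-grained matrix sums to one, so the aggregated chain is again a Markov chain on the blocks; the quality of the partition is measured by how well this small operator approximates the diffusion on the full data set.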

S. Lafon and A. B. Lee, "Diffusion Maps and Coarse-Graining: A Unified Framework for Dimensionality Reduction, Graph Partitioning, and Data Set Parameterization," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 28, pp. 1393-1403, 2006.