Issue No. 05 - May 2012 (vol. 24)
ISSN: 1041-4347
pp: 952-960
David Newman, University of California, Irvine
Alexander Ihler, University of California, Irvine
ABSTRACT
Latent Dirichlet allocation (LDA) is a popular algorithm for discovering semantic structure in large collections of text or other data. Although its complexity is linear in the data size, its use on increasingly massive collections has created considerable interest in parallel implementations. “Approximate distributed” LDA, or AD-LDA, approximates the popular collapsed Gibbs sampling algorithm for LDA models while running on a distributed architecture. Although this algorithm often appears to perform well in practice, its quality is not well understood theoretically or easily assessed on new data. In this work, we theoretically justify the approximation, and modify AD-LDA to track an error bound on performance. Specifically, we upper bound the probability of making a sampling error at each step of the algorithm (compared to an exact, sequential Gibbs sampler), given the samples drawn thus far. We show empirically that our bound is sufficiently tight to give a meaningful and intuitive measure of approximation error in AD-LDA, allowing the user to track the tradeoff between accuracy and efficiency while executing in parallel.
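The abstract references two mechanisms that are easy to make concrete: the collapsed Gibbs sampling update for a single token, and the AD-LDA count merge performed after each parallel sweep. The Python sketch below is illustrative only and is not the authors' implementation; all names (gibbs_sweep, ad_lda_merge, n_dk, n_kw, the toy corpus) are assumptions made for the example.

```python
# A minimal sketch of collapsed Gibbs sampling for LDA and the AD-LDA merge
# step described in the abstract. Illustrative assumptions throughout: the
# count arrays, function names, and toy data are not from the paper.
import numpy as np

def gibbs_sweep(docs, z, n_dk, n_kw, n_k, alpha, beta, rng):
    """One pass of collapsed Gibbs sampling over every token in the corpus."""
    V = n_kw.shape[1]
    for d, doc in enumerate(docs):
        for i, w in enumerate(doc):
            k = z[d][i]
            # Remove this token's current assignment from the counts.
            n_dk[d, k] -= 1; n_kw[k, w] -= 1; n_k[k] -= 1
            # Full conditional:
            # p(k) ∝ (n_dk + alpha) * (n_kw + beta) / (n_k + V*beta)
            p = (n_dk[d] + alpha) * (n_kw[:, w] + beta) / (n_k + V * beta)
            k = rng.choice(len(p), p=p / p.sum())
            # Add the token back under its newly sampled topic.
            z[d][i] = k
            n_dk[d, k] += 1; n_kw[k, w] += 1; n_k[k] += 1

def ad_lda_merge(n_kw_old, local_counts):
    """AD-LDA reduction: each processor sweeps its own document partition
    against a stale copy of the global word-topic counts, then the deltas
    are folded back into the global state."""
    n_kw = n_kw_old + sum(c - n_kw_old for c in local_counts)
    return n_kw, n_kw.sum(axis=1)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    docs = [[0, 1, 2, 1], [2, 3, 3, 0], [1, 1, 4, 2]]  # toy word ids
    K, V, alpha, beta = 2, 5, 0.1, 0.01
    z = [[int(rng.integers(K)) for _ in doc] for doc in docs]
    n_dk = np.zeros((len(docs), K)); n_kw = np.zeros((K, V)); n_k = np.zeros(K)
    for d, doc in enumerate(docs):
        for i, w in enumerate(doc):
            k = z[d][i]; n_dk[d, k] += 1; n_kw[k, w] += 1; n_k[k] += 1
    for _ in range(50):
        gibbs_sweep(docs, z, n_dk, n_kw, n_k, alpha, beta, rng)
```

Because each processor samples against stale global counts between merges, the parallel chain deviates from the exact sequential sampler; that per-step deviation is precisely the sampling-error probability the paper bounds.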
INDEX TERMS
Data mining, topic model, parallel processing, error analysis.
CITATION
David Newman, Alexander Ihler, "Understanding Errors in Approximate Distributed Latent Dirichlet Allocation", IEEE Transactions on Knowledge & Data Engineering, vol. 24, no. 5, pp. 952-960, May 2012, doi:10.1109/TKDE.2011.29