2010 IEEE International Conference on Data Mining (2010)
Sydney, Australia
Dec. 13, 2010 to Dec. 17, 2010
ISSN: 1550-4786
ISBN: 978-0-7695-4256-0
pp: 148-157
Understanding how topics within a document evolve over its structure is an interesting and important problem. In this paper, we address this problem by presenting a novel variant of Latent Dirichlet Allocation (LDA): Sequential LDA (SeqLDA). This variant directly considers the underlying sequential structure, i.e., a document consists of multiple segments (e.g., chapters, paragraphs), each of which is correlated with its previous and subsequent segments. In our model, a document and its segments are modelled as random mixtures of the same set of latent topics, each of which is a distribution over words. The topic distribution of each segment depends on that of its previous segment, while that of the first segment depends on the document-level topic distribution. This progressive dependency is captured by the nested two-parameter Poisson-Dirichlet process (PDP). We develop an efficient collapsed Gibbs sampling algorithm to sample from the posterior of the PDP. Our experimental results on patent documents show that, by taking the sequential structure within a document into account, our SeqLDA model achieves higher fidelity than LDA in terms of perplexity (a standard measure of dictionary-based compressibility). The SeqLDA model also yields a more coherent sequential topic structure than LDA, as we show in experiments on books such as Melville's "The Whale".
Latent Dirichlet Allocation, Poisson-Dirichlet process, collapsed Gibbs sampler, document structure
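The core idea in the abstract is the progressive dependency: each segment's topic distribution is drawn around the previous segment's, and the first segment's around the document's. A minimal generative sketch of that chain is below; note the paper uses a nested two-parameter Poisson-Dirichlet process for these draws, whereas this toy stand-in uses a Dirichlet centred on the previous distribution, and all function and parameter names (`seqlda_generate`, `concentration`, etc.) are illustrative, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def seqlda_generate(n_segments, seg_len, topics, doc_alpha, concentration):
    """Toy sketch of SeqLDA's sequential generative structure.

    `topics` is a (n_topics, vocab_size) matrix of word distributions.
    Each segment's topic distribution is drawn from a Dirichlet centred
    on the previous segment's distribution -- a crude approximation of
    the nested PDP draw used in the actual model.
    """
    n_topics, vocab_size = topics.shape
    # Document-level topic distribution.
    theta_doc = rng.dirichlet(doc_alpha * np.ones(n_topics))
    segments, prev = [], theta_doc
    for _ in range(n_segments):
        # Segment distribution depends on the previous segment's;
        # the first segment depends on the document distribution.
        theta = rng.dirichlet(concentration * prev + 1e-9)
        z = rng.choice(n_topics, size=seg_len, p=theta)  # topic per word
        words = [rng.choice(vocab_size, p=topics[t]) for t in z]
        segments.append(words)
        prev = theta
    return segments
```

A larger `concentration` keeps successive segment distributions closer together, mimicking smoother topic evolution across the document.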

H. Jin, W. L. Buntine and L. Du, "Sequential Latent Dirichlet Allocation: Discover Underlying Topic Structures within a Document," 2010 IEEE International Conference on Data Mining (ICDM), Sydney, Australia, 2010, pp. 148-157.