Quantifying consistency and accuracy of Latent Dirichlet Allocation

Published: November 17, 2025 | arXiv ID: 2511.12850v1

By: Saranzaya Magsarjav, Melissa Humphries, Jonathan Tuke, and more

Potential Business Impact:

Measures how reliably topic models recover real topics from messy text data.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Topic modelling in Natural Language Processing uncovers hidden topics in large, unlabelled text datasets. It is widely applied in information retrieval, content summarisation, and trend analysis across various disciplines. However, probabilistic topic models can produce different results when rerun, owing to their stochastic nature, leading to inconsistencies in the latent topics. Factors such as corpus shuffling, rare-text removal, and document elimination contribute to these variations. This instability affects replicability, reliability, and interpretation, raising concerns about whether topic models capture meaningful topics or just noise. To address these problems, we define a new stability measure that incorporates both accuracy and consistency, using the generative properties of LDA to create corpora with known ground truth. Each generated corpus is run through LDA 50 times to determine the variability in the output. We show that LDA can correctly determine the underlying number of topics in the documents. We also find that LDA is more internally consistent than accurate: multiple reruns return similar topics, but these topics are not the true topics.
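
The abstract outlines a generate-then-refit pipeline: sample a synthetic corpus from LDA's own generative process so the true topics are known, refit LDA many times, then compare the recovered topics to the ground truth (accuracy) and to each other (consistency). The sketch below illustrates that loop under assumptions of our own; the corpus dimensions and Dirichlet priors are hypothetical, and scikit-learn's LatentDirichletAllocation with Hungarian-matched cosine similarity stands in for the paper's actual stability measure.

```python
# A minimal sketch, assuming hypothetical corpus sizes and Dirichlet priors;
# Hungarian-matched cosine similarity is a stand-in for the paper's measure.
import numpy as np
from scipy.optimize import linear_sum_assignment
from sklearn.decomposition import LatentDirichletAllocation

rng = np.random.default_rng(0)
K, V, D, DOC_LEN = 5, 200, 500, 100   # topics, vocab size, docs, words/doc

# 1. Generate a corpus from LDA's generative process, so the truth is known.
true_topics = rng.dirichlet(np.full(V, 0.1), size=K)   # K topic-word dists
theta = rng.dirichlet(np.full(K, 0.5), size=D)         # D doc-topic dists
X = np.zeros((D, V), dtype=int)                        # doc-term count matrix
for d in range(D):
    z = rng.choice(K, size=DOC_LEN, p=theta[d])        # topic for each word
    for k in range(K):
        n_k = int((z == k).sum())
        if n_k:
            np.add.at(X[d], rng.choice(V, size=n_k, p=true_topics[k]), 1)

# 2. Refit LDA repeatedly (the paper uses 50 reruns) with different seeds.
def fit_topics(seed):
    lda = LatentDirichletAllocation(n_components=K, random_state=seed).fit(X)
    comp = lda.components_
    return comp / comp.sum(axis=1, keepdims=True)      # rows -> distributions

runs = [fit_topics(seed) for seed in range(50)]

# Match topics between two runs (cosine similarity + Hungarian assignment,
# since topic labels are arbitrary) and report the mean matched similarity.
def matched_similarity(a, b):
    a = a / np.linalg.norm(a, axis=1, keepdims=True)
    b = b / np.linalg.norm(b, axis=1, keepdims=True)
    sim = a @ b.T
    r, c = linear_sum_assignment(-sim)                 # maximise total match
    return sim[r, c].mean()

# 3. Accuracy: recovered topics vs. the known ground truth.
accuracy = np.mean([matched_similarity(run, true_topics) for run in runs])

# 4. Consistency: recovered topics vs. each other across reruns.
consistency = np.mean([matched_similarity(runs[i], runs[j])
                       for i in range(len(runs))
                       for j in range(i + 1, len(runs))])

print(f"accuracy vs ground truth: {accuracy:.3f}")
print(f"consistency across reruns: {consistency:.3f}")
```

The matching step matters because topic indices are arbitrary from run to run; with a measure along these lines, the abstract's finding would appear as consistency noticeably exceeding accuracy.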

Country of Origin
🇦🇺 Australia

Page Count
8 pages

Category
Computer Science:
Computation and Language