Quantifying consistency and accuracy of Latent Dirichlet Allocation
By: Saranzaya Magsarjav, Melissa Humphries, Jonathan Tuke, and more
Potential Business Impact:
Finds real topics in messy text data.
Topic modelling in Natural Language Processing uncovers hidden topics in large, unlabelled text datasets. It is widely applied in fields such as information retrieval, content summarisation, and trend analysis across various disciplines. However, probabilistic topic models can produce different results when rerun because of their stochastic nature, leading to inconsistencies in the latent topics. Factors such as corpus shuffling, removal of rare text, and document elimination all contribute to these variations. This instability affects replicability, reliability, and interpretation, raising the concern of whether topic models capture meaningful topics or just noise. To address these problems, we define a new stability measure that incorporates both accuracy and consistency, and we use the generative properties of LDA to produce new corpora with a known ground truth. Each generated corpus is run through LDA 50 times to quantify the variability in the output. We show that LDA can correctly determine the underlying number of topics in the documents. We also find that LDA is internally consistent, as the multiple reruns return similar topics; however, these topics are not the true topics.
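To make the protocol concrete, below is a minimal Python sketch, not the authors' code, of the rerun-stability idea the abstract describes: synthesize a corpus from LDA's own generative process so the true topics are known, then refit LDA under several random seeds and score each recovered topic against its best-matching true topic. The corpus sizes, Dirichlet priors, rerun count (5 instead of the paper's 50, for speed), and the cosine-based matching score are all illustrative assumptions, not the paper's actual stability measure.

import numpy as np
from sklearn.decomposition import LatentDirichletAllocation

rng = np.random.default_rng(0)
K, V, n_docs, doc_len = 5, 200, 500, 80   # topics, vocab size, docs, words/doc (assumed)

# Ground-truth parameters drawn from the LDA generative model.
true_topics = rng.dirichlet(np.full(V, 0.1), size=K)   # K x V topic-word distributions
theta = rng.dirichlet(np.full(K, 0.5), size=n_docs)    # document-topic mixtures

# Generate each document as a bag-of-words count vector.
X = np.zeros((n_docs, V), dtype=int)
for d in range(n_docs):
    z = rng.choice(K, size=doc_len, p=theta[d])        # topic assignment per word
    for k in range(K):
        n_k = int((z == k).sum())
        if n_k:
            words = rng.choice(V, size=n_k, p=true_topics[k])
            np.add.at(X[d], words, 1)

def best_match_similarity(est, truth):
    """Mean cosine similarity between each estimated topic and its
    closest ground-truth topic (a simple stand-in for the paper's measure)."""
    est_n = est / np.linalg.norm(est, axis=1, keepdims=True)
    truth_n = truth / np.linalg.norm(truth, axis=1, keepdims=True)
    return (est_n @ truth_n.T).max(axis=1).mean()

# Refit LDA under different seeds and compare each fit to the ground truth.
for seed in range(5):
    lda = LatentDirichletAllocation(n_components=K, random_state=seed)
    lda.fit(X)
    print(f"seed {seed}: accuracy vs truth = "
          f"{best_match_similarity(lda.components_, true_topics):.3f}")

Consistency across reruns can be probed the same way by comparing the fitted components_ matrices between seeds rather than against the ground truth; the paper's finding is that these between-run similarities tend to be high even when the match to the true topics is not.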
Similar Papers
Topic Analysis with Side Information: A Neural-Augmented LDA Approach
Machine Learning (CS)
Helps computers understand topics using extra clues.
Analyzing Political Text at Scale with Online Tensor LDA
Machine Learning (CS)
Lets computers understand huge amounts of text fast.