Semantic-Augmented Latent Topic Modeling with LLM-in-the-Loop
By: Mengze Hong, Chen Jason Zhang, Di Jiang
Potential Business Impact:
Helps computers discover clearer, more accurate topics in large collections of text.
Latent Dirichlet Allocation (LDA) is a prominent generative probabilistic model used for uncovering abstract topics within document collections. In this paper, we explore the effectiveness of augmenting topic models with Large Language Models (LLMs) through integration into two key phases: initialization and post-correction. Since LDA is highly dependent on the quality of its initialization, we conduct extensive experiments on LLM-guided topic clustering for initializing the Gibbs sampling algorithm. Interestingly, the experimental results reveal that while the proposed initialization strategy improves the early iterations of LDA, it has no effect on convergence and ultimately yields the worst performance among the baselines. LLM-enabled post-correction, on the other hand, achieves a promising 5.86% improvement in coherence evaluation. These results highlight the practical benefits of the LLM-in-the-loop approach and challenge the belief that LLMs are always the superior alternative for text mining.
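To make the two LLM hooks concrete, here is a minimal Python sketch of what LLM-guided initialization and post-correction could look like. It assumes a hypothetical llm_complete(prompt) helper that returns the model's text reply; the function names, prompts, and fallback logic are illustrative assumptions, not the authors' implementation.

import random

def llm_guided_init(docs, num_topics, llm_complete):
    """Ask an LLM to assign each document to one of K coarse clusters,
    then seed every token's topic with its document's cluster id.
    Falls back to a random topic if the LLM reply is unparseable."""
    assignments = []
    for doc in docs:
        prompt = (f"Assign this document to one topic cluster, answering "
                  f"with a single integer in [0, {num_topics - 1}].\n\n"
                  f"Document: {' '.join(doc)[:1000]}")
        try:
            k = int(llm_complete(prompt).strip())
            if not 0 <= k < num_topics:
                k = random.randrange(num_topics)
        except ValueError:
            k = random.randrange(num_topics)
        # Seed all tokens in the document with the document-level cluster,
        # to be used as the topic assignments z at Gibbs iteration 0.
        assignments.append([k] * len(doc))
    return assignments

def llm_post_correct(topic_words, llm_complete):
    """After LDA converges, ask the LLM to drop intruder words from each
    topic's top-word list (the post-correction step)."""
    corrected = []
    for words in topic_words:
        prompt = ("Remove words that do not belong with the others, and "
                  "return the rest comma-separated: " + ", ".join(words))
        kept = {w.strip() for w in llm_complete(prompt).split(",")}
        # Keep only words that were in the original list; if the LLM
        # removes everything, fall back to the uncorrected topic.
        corrected.append([w for w in words if w in kept] or words)
    return corrected

In this sketch, the initializer only replaces the random seeding of Gibbs sampling, which is consistent with the finding that it shapes early iterations but not the converged state, while post-correction operates purely on the finished topic-word lists.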
Similar Papers
Quantifying consistency and accuracy of Latent Dirichlet Allocation
Computation and Language
Finds real topics in messy text data.
Topic Analysis with Side Information: A Neural-Augmented LDA Approach
Machine Learning (CS)
Helps computers understand topics using extra clues.