Semantic-Augmented Latent Topic Modeling with LLM-in-the-Loop

Published: July 11, 2025 | arXiv ID: 2507.08498v1

By: Mengze Hong, Chen Jason Zhang, Di Jiang

Potential Business Impact:

Improves the quality of topics automatically discovered in large text collections, which underpins document organization, search, and text analytics.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Latent Dirichlet Allocation (LDA) is a prominent generative probabilistic model for uncovering abstract topics within document collections. In this paper, we explore the effectiveness of augmenting topic models with Large Language Models (LLMs) by integrating them into two key phases: initialization and post-correction. Since LDA is highly sensitive to the quality of its initialization, we conduct extensive experiments on LLM-guided topic clustering for initializing the Gibbs sampling algorithm. Interestingly, the experimental results reveal that while the proposed initialization strategy improves the early iterations of LDA, it has no effect on convergence and ultimately yields the worst performance relative to the baselines. LLM-enabled post-correction, on the other hand, achieves a promising 5.86% improvement in coherence evaluation. These results highlight the practical benefits of the LLM-in-the-loop approach and challenge the belief that LLMs are always the superior alternative for text mining.
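
The abstract names the two integration points but not their mechanics. Below is a minimal, hypothetical sketch (not the authors' code) of a collapsed Gibbs sampler for LDA whose per-token topic assignments can be seeded from an externally supplied clustering, standing in for the paper's LLM-guided initialization; the `seed_topics` argument and all names are illustrative assumptions.

```python
# Minimal sketch of collapsed Gibbs sampling for LDA, with optional seeding
# of the initial topic assignments (e.g., from an LLM-guided clustering step)
# in place of the usual uniform-random initialization.
import random
from collections import defaultdict

def gibbs_lda(docs, K, iters=200, alpha=0.1, beta=0.01, seed_topics=None):
    """docs: list of token lists; K: number of topics.
    seed_topics: optional dict {(doc_idx, token_idx): topic} holding
    assignments suggested by an upstream (e.g., LLM) clustering step."""
    vocab = {w for doc in docs for w in doc}
    V = len(vocab)
    ndk = defaultdict(int)   # document-topic counts
    nkw = defaultdict(int)   # topic-word counts
    nk = defaultdict(int)    # per-topic totals
    z = {}                   # current topic assignment per token

    # Initialization: take the seeded topic where provided, random otherwise.
    for d, doc in enumerate(docs):
        for i, w in enumerate(doc):
            t = seed_topics.get((d, i), random.randrange(K)) if seed_topics else random.randrange(K)
            z[(d, i)] = t
            ndk[(d, t)] += 1; nkw[(t, w)] += 1; nk[t] += 1

    # Collapsed Gibbs sweeps.
    for _ in range(iters):
        for d, doc in enumerate(docs):
            for i, w in enumerate(doc):
                t = z[(d, i)]
                ndk[(d, t)] -= 1; nkw[(t, w)] -= 1; nk[t] -= 1
                # p(t) ∝ (n_dk + α) · (n_kw + β) / (n_k + V·β)
                weights = [(ndk[(d, k)] + alpha) * (nkw[(k, w)] + beta) / (nk[k] + V * beta)
                           for k in range(K)]
                t = random.choices(range(K), weights=weights)[0]
                z[(d, i)] = t
                ndk[(d, t)] += 1; nkw[(t, w)] += 1; nk[t] += 1
    return z, nkw
```

The post-correction phase could analogously be sketched as a pass that sends each topic's top words to an LLM and asks it to flag intruder words for reassignment, which is one plausible route to the reported coherence gain; the paper's actual procedure may differ.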

Page Count
7 pages

Category
Computer Science:
Computation and Language