LLM-Assisted Topic Reduction for BERTopic on Social Media Data
By: Wannes Janssens, Matthias Bogaert, Dirk Van den Poel
Potential Business Impact:
Merges overlapping topics so messy social media text yields clearer themes.
The BERTopic framework leverages transformer embeddings and hierarchical clustering to extract latent topics from unstructured text corpora. While effective, it often struggles with social media data, which tends to be noisy and sparse, resulting in an excessive number of overlapping topics. Recent work has explored the use of large language models for end-to-end topic modelling, but these approaches typically incur significant computational overhead, limiting their scalability in big data contexts. In this work, we propose a framework that combines BERTopic for topic generation with large language models for topic reduction. The method first generates an initial set of topics and constructs a representation for each. These representations are then provided as input to the language model, which iteratively identifies and merges semantically similar topics. We evaluate the approach across three Twitter/X datasets and four language models. Our method outperforms the baseline approach in enhancing topic diversity and, in many cases, coherence, with some sensitivity to dataset characteristics and initial parameter selection.
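As a rough illustration of the pipeline described above (not the authors' implementation), the sketch below fits topics with BERTopic, builds a keyword representation for each topic, asks a language model which topics overlap, and merges them with BERTopic's merge_topics. The ask_llm helper, the prompt wording, the reply format, and the max_rounds parameter are all assumptions made for illustration.

```python
# Minimal sketch of BERTopic generation + LLM-driven topic reduction.
# Assumes a fitted BERTopic model and a placeholder ask_llm() wrapper around
# whichever LLM client is used; prompt wording, reply format, and max_rounds
# are illustrative assumptions, not the paper's exact setup.
from bertopic import BERTopic


def ask_llm(prompt: str) -> str:
    """Placeholder: send the prompt to an LLM of choice and return its text reply."""
    raise NotImplementedError


def reduce_topics_with_llm(topic_model: BERTopic, docs: list[str], max_rounds: int = 5) -> BERTopic:
    """Iteratively ask the LLM for semantically similar topic pairs and merge them."""
    for _ in range(max_rounds):
        # Build a compact keyword representation for every topic (skip -1, the outlier topic).
        reps = {
            topic_id: ", ".join(word for word, _ in words[:10])
            for topic_id, words in topic_model.get_topics().items()
            if topic_id != -1
        }
        listing = "\n".join(f"{tid}: {keywords}" for tid, keywords in reps.items())
        prompt = (
            "Each line below is a topic id followed by its top keywords. "
            "List pairs of topics that describe the same theme and should be merged, "
            "one pair of ids per line (e.g. '3 7'). Reply 'NONE' if no merge is needed.\n\n"
            + listing
        )
        answer = ask_llm(prompt)
        if answer.strip().upper() == "NONE":
            break

        # Parse the reply into [id, id] pairs, ignoring malformed lines.
        pairs = []
        for line in answer.splitlines():
            parts = line.split()
            if len(parts) == 2 and all(p.isdigit() for p in parts):
                pairs.append([int(parts[0]), int(parts[1])])
        if not pairs:
            break

        # BERTopic collapses each listed group of topic ids into a single topic.
        topic_model.merge_topics(docs, pairs)
    return topic_model
```

In this sketch the loop stops when the model reports no further merges or after max_rounds iterations, so the LLM cost stays bounded to a handful of short keyword prompts rather than end-to-end topic modelling over the full corpus.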
Similar Papers
TopiCLEAR: Topic extraction by CLustering Embeddings with Adaptive dimensional Reduction
Computation and Language
Finds hidden topics in social media posts.
Enhancing BERTopic with Intermediate Layer Representations
Computation and Language
Finds hidden topics in lots of words.
Creating Targeted, Interpretable Topic Models with LLM-Generated Text Augmentation
Computation and Language
Helps computers find hidden ideas in text.