ClusterFusion: Hybrid Clustering with Embedding Guidance and LLM Adaptation
By: Yiming Xu, Yuan Yuan, Vijay Viswanathan, and more
Potential Business Impact:
Helps computers group texts by meaning more accurately.
Text clustering is a fundamental task in natural language processing, yet traditional clustering algorithms with pre-trained embeddings often struggle in domain-specific contexts without costly fine-tuning. Large language models (LLMs) provide strong contextual reasoning, yet prior work mainly uses them as auxiliary modules to refine embeddings or adjust cluster boundaries. We propose ClusterFusion, a hybrid framework that instead treats the LLM as the clustering core, guided by lightweight embedding methods. The framework proceeds in three stages: embedding-guided subset partition, LLM-driven topic summarization, and LLM-based topic assignment. This design enables direct incorporation of domain knowledge and user preferences, fully leveraging the contextual adaptability of LLMs. Experiments on three public benchmarks and two new domain-specific datasets demonstrate that ClusterFusion not only achieves state-of-the-art performance on standard tasks but also delivers substantial gains in specialized domains. To support future work, we release our newly constructed dataset and results on all benchmarks.
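The abstract's three-stage design maps naturally onto a short pipeline. The sketch below is a minimal illustration of that flow, not the authors' released implementation: the embedding model choice, the prompts, and the caller-supplied `llm_complete` function are all assumptions for illustration.

```python
# Minimal sketch of the three-stage ClusterFusion pipeline described in the
# abstract. Model names, prompts, and helpers are illustrative assumptions.
from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans

def cluster_fusion(texts, llm_complete, n_subsets=10):
    """llm_complete: callable(prompt: str) -> str, wrapping any chat LLM."""
    # Stage 1: embedding-guided subset partition.
    # A lightweight embedding model coarsely splits the corpus so each
    # subset fits in the LLM's context window.
    embedder = SentenceTransformer("all-MiniLM-L6-v2")  # assumed model choice
    embeddings = embedder.encode(texts)
    subset_ids = KMeans(n_clusters=n_subsets, n_init="auto").fit_predict(embeddings)

    # Stage 2: LLM-driven topic summarization.
    # The LLM reads each subset and names the topics it contains.
    topics = []
    for s in range(n_subsets):
        subset = [t for t, sid in zip(texts, subset_ids) if sid == s]
        prompt = ("Summarize the distinct topics in these texts, one per line:\n"
                  + "\n".join(subset[:50]))  # truncate to respect context limits
        topics.extend(line.strip() for line in llm_complete(prompt).splitlines()
                      if line.strip())

    # Stage 3: LLM-based topic assignment.
    # Each text is matched to a summarized topic by the LLM; this is the
    # step where domain knowledge or user preferences can be injected.
    topic_list = "\n".join(f"{i}: {t}" for i, t in enumerate(topics))
    assignments = []
    for text in texts:
        prompt = (f"Topics:\n{topic_list}\n\nText: {text}\n"
                  "Reply with the single topic number that fits best.")
        # Assumes the LLM replies with a bare topic index.
        assignments.append(int(llm_complete(prompt).strip().split()[0]))
    return topics, assignments
```

Because the LLM, rather than the embedding space, makes the final grouping decisions, domain instructions can be added directly to the Stage 2 and Stage 3 prompts without any fine-tuning.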
Similar Papers
LLM-MemCluster: Empowering Large Language Models with Dynamic Memory for Text Clustering
Computation and Language
Lets computers group texts by meaning better.
Layer-Aware Embedding Fusion for LLMs in Text Classifications
Computation and Language
Improves AI understanding by mixing word meanings.
LLM-as-classifier: Semi-Supervised, Iterative Framework for Hierarchical Text Classification using Large Language Models
Computation and Language
Makes smart computer programs sort text better.