LabelFusion: Learning to Fuse LLMs and Transformer Classifiers for Robust Text Classification
By: Michael Schlee, Christoph Weisser, Timo Kivimäki, and more
Potential Business Impact:
Combines AI brains for smarter text sorting.
LabelFusion is a fusion ensemble for text classification that learns to combine a traditional transformer-based classifier (e.g., RoBERTa) with one or more Large Language Models (LLMs such as OpenAI GPT, Google Gemini, or DeepSeek) to deliver accurate and cost-aware predictions across multi-class and multi-label tasks. The package provides a simple high-level interface (AutoFusionClassifier) that trains the full pipeline end-to-end with minimal configuration, as well as a flexible API for advanced users. Under the hood, LabelFusion integrates vector signals from both sources by concatenating the ML backbone's embeddings with the LLM-derived per-class scores -- obtained through structured prompt-engineering strategies -- and feeding this joint representation into a compact multi-layer perceptron (FusionMLP) that produces the final prediction. This learned fusion approach captures the complementary strengths of LLM reasoning and traditional transformer-based classifiers, yielding robust performance across domains -- achieving 92.4% accuracy on AG News and 92.3% on 10-class Reuters-21578 topic classification -- while enabling practical trade-offs between accuracy, latency, and cost.
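The concatenate-and-classify design is easy to picture in code. Below is a minimal PyTorch sketch of that fusion step; the FusionMLP name comes from the abstract, but the layer sizes, variable names, and wiring are illustrative assumptions rather than LabelFusion's actual implementation or API.

```python
import torch
import torch.nn as nn

class FusionMLP(nn.Module):
    """Compact MLP over the joint [backbone embedding ; LLM per-class scores] vector."""

    def __init__(self, embed_dim: int, num_classes: int, hidden_dim: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(embed_dim + num_classes, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, num_classes),
        )

    def forward(self, embedding: torch.Tensor, llm_scores: torch.Tensor) -> torch.Tensor:
        # Concatenate the transformer backbone's embedding with the
        # LLM-derived per-class scores, then map the joint vector to logits.
        joint = torch.cat([embedding, llm_scores], dim=-1)
        return self.net(joint)

# Toy usage: 768-d RoBERTa-style embeddings fused with scores for the 4 AG News classes.
fusion = FusionMLP(embed_dim=768, num_classes=4)
embeddings = torch.randn(8, 768)                   # batch of pooled backbone embeddings
llm_scores = torch.softmax(torch.randn(8, 4), -1)  # stand-in for prompt-derived class scores
logits = fusion(embeddings, llm_scores)            # final predictions come from these logits
```

Passing the LLM signal in as per-class scores rather than raw generated text keeps the fusion head small and lets it learn, per domain, how much weight to give each source.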
Similar Papers
ClusterFusion: Hybrid Clustering with Embedding Guidance and LLM Adaptation
Computation and Language
Helps computers group words by meaning better.
A Theoretically Grounded Hybrid Ensemble for Reliable Detection of LLM-Generated Text
Computation and Language
Finds fake writing in schoolwork better.
LLM-Guided Probabilistic Fusion for Label-Efficient Document Layout Analysis
Computer Vision and Pattern Recognition
Helps computers understand document layouts with less data.