LLMBoost: Make Large Language Models Stronger with Boosting
By: Zehao Chen, Tianxiang Ai, Yifei Li, and more
Ensemble learning of LLMs has emerged as a promising way to enhance performance, but existing approaches typically treat models as black boxes, combining their inputs or final outputs while overlooking the rich internal representations and interactions across models. In this work, we introduce LLMBoost, a novel ensemble fine-tuning framework that breaks this barrier by explicitly leveraging the intermediate states of LLMs. Inspired by the boosting paradigm, LLMBoost incorporates three key innovations. First, a cross-model attention mechanism enables successor models to access and fuse hidden states from their predecessors, facilitating hierarchical error correction and knowledge transfer. Second, a chain training paradigm progressively fine-tunes the connected models with an error-suppression objective, ensuring that each model rectifies the mispredictions of its predecessor with minimal additional computation. Third, a near-parallel inference design pipelines hidden states across models layer by layer, achieving inference efficiency that approaches single-model decoding. We further establish the theoretical foundations of LLMBoost, proving that sequential integration guarantees monotonic improvements under bounded correction assumptions. Extensive experiments on commonsense reasoning and arithmetic reasoning tasks demonstrate that LLMBoost consistently boosts accuracy while reducing inference latency.
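To make the cross-model attention idea concrete, the following is a minimal, hypothetical sketch (not the paper's actual implementation) of how a layer in a successor model could attend over a predecessor model's hidden states and fuse the result into its own residual stream. The module name CrossModelAttention, the gated-residual fusion, and all dimensions are illustrative assumptions.

```python
import torch
import torch.nn as nn

class CrossModelAttention(nn.Module):
    """Hypothetical sketch: a successor layer attends over a predecessor
    model's hidden states and fuses them into its own residual stream."""

    def __init__(self, d_model: int, n_heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.gate = nn.Linear(2 * d_model, d_model)

    def forward(self, h_succ: torch.Tensor, h_pred: torch.Tensor) -> torch.Tensor:
        # Queries come from the successor; keys/values come from the predecessor,
        # so the successor can read (and potentially correct) earlier representations.
        fused, _ = self.attn(query=h_succ, key=h_pred, value=h_pred)
        # A learned gate controls how much predecessor information is mixed in,
        # keeping the successor's own stream dominant by default.
        g = torch.sigmoid(self.gate(torch.cat([h_succ, fused], dim=-1)))
        return h_succ + g * fused


if __name__ == "__main__":
    B, T, D = 2, 16, 768                  # batch size, sequence length, hidden size
    layer = CrossModelAttention(D)
    h_successor = torch.randn(B, T, D)    # successor model's hidden states
    h_predecessor = torch.randn(B, T, D)  # predecessor model's hidden states
    out = layer(h_successor, h_predecessor)
    print(out.shape)                      # torch.Size([2, 16, 768])
```

In a boosting-style chain as described in the abstract, one would presumably insert such a fusion point at matching layers of each successor model and train it with an objective that penalizes repeating the predecessor's errors; because each layer only needs the predecessor's output for that same layer, hidden states can be pipelined layer by layer, which is what allows near-parallel inference.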