BARD: Budget-Aware Reasoning Distillation
By: Lujie Niu, Lei Shen, Yi Jiang, and more
Potential Business Impact:
Teaches AI to think clearly, but faster.
While long Chain-of-Thought (CoT) distillation effectively transfers reasoning capability to smaller language models, the reasoning process often remains redundant and the computational budget uncontrollable, leading to inefficient resource usage. To address this limitation, we propose Budget-Aware Reasoning Distillation (BARD), a novel framework that simultaneously distills reasoning capability and enables fine-grained control over reasoning length. BARD uses the thinking budget as a user-specified control signal, allowing the model to dynamically balance reasoning performance and computational efficiency. To realize this, BARD introduces a two-phase training regimen. The first phase applies Supervised Fine-Tuning (SFT) on teacher-generated long CoT data compressed to various budget levels, bootstrapping the model's understanding of budget constraints. The second phase leverages Reinforcement Learning (RL) with a reward signal that jointly accounts for reasoning performance and budget fidelity. Combining the two phases is crucial for avoiding policy degradation and ensuring that both objectives are optimized jointly. Extensive experiments demonstrate that our method empowers an 8B student model to achieve strong performance on challenging reasoning benchmarks (AIME24, AIME25, GPQA) while providing precise and adaptive control over its reasoning length across a wide range of budgets.
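The abstract does not give the reward formula used in the RL phase. As an illustration only, the following minimal Python sketch shows one way a reward could jointly consider answer correctness and budget fidelity; the function name, the linear deviation penalty, and the weight alpha are assumptions for exposition, not the paper's actual definition.

```python
# Hypothetical sketch of a budget-aware reward (NOT the paper's exact formulation).
# Assumptions: `is_correct` comes from an answer verifier, `token_count` is the
# length of the generated reasoning trace, and `budget` is the user-specified
# thinking budget in tokens. `alpha` is an illustrative weighting parameter.

def budget_aware_reward(is_correct: bool, token_count: int, budget: int,
                        alpha: float = 0.5) -> float:
    """Combine reasoning performance with budget fidelity into a single scalar."""
    # Performance term: 1 if the final answer is correct, 0 otherwise.
    performance = 1.0 if is_correct else 0.0

    # Budget-fidelity term: penalize relative deviation from the requested budget,
    # so both overshooting and undershooting the target length reduce the reward.
    deviation = abs(token_count - budget) / max(budget, 1)
    fidelity = max(0.0, 1.0 - deviation)

    # Weighted combination; the RL phase then optimizes both objectives jointly.
    return (1.0 - alpha) * performance + alpha * fidelity


if __name__ == "__main__":
    # Example: correct answer using 900 reasoning tokens against a 1000-token budget.
    print(budget_aware_reward(is_correct=True, token_count=900, budget=1000))
```

Under this kind of shaping, a policy that answers correctly but ignores the budget, or that hits the budget while answering incorrectly, receives only partial reward, which matches the paper's stated goal of optimizing performance and budget fidelity together.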
Similar Papers
From Reasoning LLMs to BERT: A Two-Stage Distillation Framework for Search Relevance
Information Retrieval
Makes online shopping search faster and smarter.
Marco-o1 v2: Towards Widening The Distillation Bottleneck for Reasoning Models
Machine Learning (CS)
Teaches small computers to think better, not overthink.
Beyond Scaling Law: A Data-Efficient Distillation Framework for Reasoning
Machine Learning (CS)
Teaches computers to think better with less data.