Long-Chain Reasoning Distillation via Adaptive Prefix Alignment
By: Zhenghao Liu, Zhuoyang Wu, Xinze Li, et al.
Large Language Models (LLMs) have demonstrated remarkable reasoning capabilities, particularly in solving complex mathematical problems. Recent studies show that distilling long reasoning trajectories can effectively enhance the reasoning performance of small-scale student models. However, teacher-generated reasoning trajectories are often excessively long and structurally complex, making them difficult for student models to learn from. This mismatch creates a gap between the supervision signal and the learning capacity of the student model. To address this challenge, we propose Prefix-ALIGNment distillation (P-ALIGN), a framework that fully exploits teacher chains of thought (CoTs) for distillation through adaptive prefix alignment. Specifically, P-ALIGN adaptively truncates teacher-generated reasoning trajectories by determining whether the remaining suffix is concise and sufficient to guide the student model. P-ALIGN then uses the teacher-generated prefix to supervise the student model, encouraging effective prefix alignment. Experiments on multiple mathematical reasoning benchmarks demonstrate that P-ALIGN outperforms all baselines by over 3%. Further analysis indicates that the prefixes constructed by P-ALIGN provide more effective supervision signals while avoiding the negative impact of redundant and uncertain reasoning components. All code is available at https://github.com/NEUIR/P-ALIGN.
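To make the truncation idea concrete, the following is a minimal sketch of adaptive prefix selection, not the paper's actual algorithm. It assumes a hypothetical per-step `sufficiency` score (e.g., an estimate of how likely the student is to reason correctly from that cut point onward) and picks the earliest cut where the remaining suffix is both concise and judged sufficient; the retained prefix would then serve as the supervision target.

```python
def adaptive_prefix(steps, sufficiency, max_suffix_len=3, threshold=0.8):
    """Illustrative prefix truncation for reasoning-trace distillation.

    steps:       teacher reasoning trace, as a list of reasoning steps.
    sufficiency: hypothetical per-cut scores in [0, 1]; sufficiency[k]
                 estimates whether the suffix steps[k:] is sufficient
                 to guide the student from the prefix steps[:k].
    Returns the prefix used as the supervision signal; if no cut point
    qualifies, the full trace is kept.
    """
    for k in range(len(steps)):
        suffix = steps[k:]
        # Cut at the earliest point where the remaining suffix is
        # concise (short enough) and sufficient (score above threshold).
        if len(suffix) <= max_suffix_len and sufficiency[k] >= threshold:
            return steps[:k]
    return steps


# Usage: a 5-step trace where the student is judged capable from step 3 on.
trace = ["parse problem", "set up equation", "simplify", "solve", "verify"]
scores = [0.2, 0.5, 0.9, 0.95, 1.0]
print(adaptive_prefix(trace, scores))  # keeps the first two steps as prefix
```

In practice the sufficiency score and the conciseness criterion would come from the student model itself (for example, its likelihood of completing the suffix correctly); the fixed `max_suffix_len` and `threshold` here are placeholder hyperparameters for illustration.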