Efficient Long CoT Reasoning in Small Language Models
By: Zhaoyang Wang, Jinqi Jiang, Tian Qiu, and others
Potential Business Impact:
Teaches small AI models to think through hard problems.
Recent large reasoning models such as DeepSeek-R1 exhibit strong complex problem-solving abilities by generating long chain-of-thought (CoT) reasoning steps. It is challenging to directly train small language models (SLMs) to produce such long CoT, so distillation has become a practical way to equip SLMs with this reasoning ability. However, long CoT often contains substantial redundant content (e.g., overthinking steps), which can be hard for SLMs to learn given their relatively limited capacity and generalization. To address this issue, we propose a simple yet effective method that prunes unnecessary steps from long CoT and then employs an on-policy method, using the SLM itself, to curate valid and useful long CoT training data. In this way, SLMs can effectively learn efficient long CoT reasoning while preserving competitive performance. Experimental results across a series of mathematical reasoning benchmarks demonstrate the effectiveness of the proposed method in distilling long CoT reasoning ability into SLMs, which maintain competitive performance while generating significantly fewer redundant reasoning steps.
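The two-stage idea in the abstract, pruning redundant steps and then keeping only traces the SLM itself can use, can be sketched as below. Every function name, the redundancy heuristic, and the toy trace are illustrative assumptions for exposition, not the paper's actual implementation.

```python
# Sketch of the two-stage curation idea: (1) prune redundant CoT steps,
# (2) on-policy filtering: keep only pruned traces the SLM handles correctly.
# Heuristics and names here are hypothetical, not the paper's method.

def prune_redundant_steps(steps, is_redundant):
    """Drop steps flagged as redundant (e.g., overthinking/re-verification)."""
    return [s for s in steps if not is_redundant(s)]

def curate_on_policy(traces, slm_solves_with):
    """Keep only pruned traces that the SLM itself can follow to a correct answer."""
    return [t for t in traces if slm_solves_with(t)]

# Toy redundancy heuristic: treat steps opening with hedging phrases as overthinking.
REDUNDANT_MARKERS = ("Wait,", "Let me double-check", "Alternatively,")

def looks_redundant(step):
    return step.startswith(REDUNDANT_MARKERS)

trace = [
    "Compute 12 * 7 = 84.",
    "Wait, let me re-verify that multiplication.",
    "Add 16 to get 100.",
]
pruned = prune_redundant_steps(trace, looks_redundant)
print(pruned)  # ['Compute 12 * 7 = 84.', 'Add 16 to get 100.']
```

In practice the redundancy check and the on-policy filter would both query models rather than string prefixes; the sketch only shows the data flow.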
Similar Papers
Deconstructing Long Chain-of-Thought: A Structured Reasoning Optimization Framework for Long CoT Distillation
Artificial Intelligence
Teaches computers to think better, step-by-step.
The Challenge of Teaching Reasoning to LLMs Without RL or Distillation
Artificial Intelligence
Teaches computers to think step-by-step to solve problems.
Unveiling the Key Factors for Distilling Chain-of-Thought Reasoning
Computation and Language
Makes small AI think like big AI.