Efficient Long CoT Reasoning in Small Language Models

Published: May 24, 2025 | arXiv ID: 2505.18440v2

By: Zhaoyang Wang, Jinqi Jiang, Tian Qiu, and more

Potential Business Impact:

Teaches small language models to reason through hard problems efficiently.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Recent large reasoning models such as DeepSeek-R1 exhibit strong complex problem-solving abilities by generating long chain-of-thought (CoT) reasoning steps. It is challenging to directly train small language models (SLMs) to produce such long CoT, so distillation becomes a practical way to endow SLMs with this reasoning ability. However, long CoT often contains a lot of redundant content (e.g., overthinking steps), which SLMs may find hard to learn given their relatively limited capacity and generalization. To address this issue, we propose a simple yet effective method that prunes unnecessary steps in long CoT and then employs an on-policy procedure in which the SLM itself curates valid and useful long CoT training data. In this way, SLMs can learn efficient long CoT reasoning while preserving competitive performance. Experimental results across a series of mathematical reasoning benchmarks demonstrate the effectiveness of the proposed method in distilling long CoT reasoning ability into SLMs, which maintain competitive performance while generating significantly fewer redundant reasoning steps.
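To make the two-stage idea in the abstract concrete, below is a minimal Python sketch of a pruning-then-on-policy-filtering pipeline. All function names, the keyword heuristic for "redundant" steps, and the toy SLM stand-in are assumptions for illustration only; the paper's actual pruning rule and on-policy curation procedure may differ.

```python
# Hypothetical sketch: (1) prune redundant steps from a long CoT trace,
# (2) keep only traces the SLM itself can follow to the correct answer
# (on-policy filtering). Names and heuristics are illustrative assumptions,
# not the authors' exact implementation.

from typing import Callable, List


def prune_redundant_steps(steps: List[str]) -> List[str]:
    """Drop steps that look like overthinking (restatements, self-doubt).
    The keyword heuristic below is a placeholder for the paper's pruning rule."""
    redundant_markers = ("wait", "let me double-check", "alternatively", "hmm")
    return [s for s in steps if not s.lower().startswith(redundant_markers)]


def on_policy_filter(
    problems: List[str],
    pruned_cots: List[List[str]],
    answers: List[str],
    slm_answer: Callable[[str, List[str]], str],
) -> List[dict]:
    """Keep a (problem, CoT, answer) triple only if the SLM, conditioned on
    the pruned CoT, still reaches the reference answer."""
    kept = []
    for problem, cot, answer in zip(problems, pruned_cots, answers):
        if slm_answer(problem, cot).strip() == answer.strip():
            kept.append({"problem": problem, "cot": cot, "answer": answer})
    return kept


if __name__ == "__main__":
    # Toy stand-in for the SLM: simply echoes the final CoT step as its answer.
    def toy_slm(problem: str, cot: List[str]) -> str:
        return cot[-1] if cot else ""

    raw_cot = ["Compute 2 + 3 = 5.", "Wait, let me double-check that.", "5"]
    pruned = prune_redundant_steps(raw_cot)          # drops the "Wait..." step
    data = on_policy_filter(["What is 2 + 3?"], [pruned], ["5"], toy_slm)
    print(data)  # retained training example with the shortened CoT
```

In practice the toy SLM would be replaced by the actual small model being distilled, so that only traces the model can genuinely reproduce are kept as training data.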

Country of Origin
πŸ‡ΊπŸ‡Έ United States

Page Count
14 pages

Category
Computer Science:
Computation and Language