Score: 3

Expanding Reasoning Potential in Foundation Model by Learning Diverse Chains of Thought Patterns

Published: September 25, 2025 | arXiv ID: 2509.21124v1

By: Xuemiao Zhang, Can Ren, Chengying Tu, and more

BigTech Affiliations: Meituan

Potential Business Impact:

Improves AI models' ability to solve hard math problems by training them on carefully selected, high-value chain-of-thought reasoning examples.

Business Areas:
A/B Testing, Data and Analytics

Recent progress in large reasoning models on challenging mathematical reasoning has been driven by reinforcement learning (RL). Incorporating long chain-of-thought (CoT) data during mid-training has also been shown to substantially improve reasoning depth. However, current approaches often use CoT data indiscriminately, leaving open the critical question of which data types most effectively enhance model reasoning capabilities. In this paper, we define, for the first time, a foundation model's reasoning potential as the inverse of the number of independent attempts required to answer a question correctly, a quantity strongly correlated with final model performance. We then propose expanding this reasoning potential with diverse data enriched with high-value reasoning patterns. Specifically, we abstract atomic reasoning patterns from CoT sequences, characterized by commonality and inductive capability, and use them to construct a core reference set enriched with valuable reasoning patterns. Furthermore, we propose a dual-granularity algorithm, operating on chains of reasoning patterns and on token entropy, that efficiently selects high-value CoT data (CoTP) from the data pool aligned with the core set, thereby training models to master reasoning effectively. With only 10B tokens of CoTP data, the 85A6B Mixture-of-Experts (MoE) model improves by 9.58% on the challenging AIME 2024 and 2025 benchmarks and raises the upper bound of downstream RL performance by 7.81%.
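To make the abstract's two key quantities concrete, here is a minimal Python sketch. The function names and estimators are assumptions for illustration, not the authors' code: reasoning potential is estimated as the inverse of the number of independent attempts before the first correct answer, and per-token entropy is the token-level signal in the dual-granularity selection. The pattern-chain granularity (matching abstracted reasoning patterns against the core reference set) is omitted, since the abstract does not specify how patterns are abstracted.

```python
import torch
import torch.nn.functional as F

def estimate_reasoning_potential(attempts, correct_answer):
    """Reasoning potential per the abstract's definition: the inverse of the
    number of independent attempts required to answer correctly.
    `attempts` is a list of final answers from independent samples."""
    for k, answer in enumerate(attempts, start=1):
        if answer == correct_answer:
            return 1.0 / k
    return 0.0  # question never solved within the attempt budget

def token_entropy(logits):
    """Per-token predictive entropy over the vocabulary (hypothetical form of
    the token-entropy signal). `logits` has shape [seq_len, vocab_size];
    returns a [seq_len] tensor of entropies."""
    log_probs = F.log_softmax(logits, dim=-1)
    return -(log_probs.exp() * log_probs).sum(dim=-1)

# Example: score a candidate CoT trace by its mean token entropy.
logits = torch.randn(12, 32000)  # dummy logits for a 12-token trace
print(token_entropy(logits).mean().item())
print(estimate_reasoning_potential(["42", "41", "42"], "42"))  # -> 1.0
```

Under this reading, a question solved on the first sampled attempt has potential 1.0, one solved on the fourth has 0.25, and an unsolved question has 0.0, which matches the abstract's claim that the measure tracks final model performance.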

Country of Origin
🇨🇳 China


Page Count
28 pages

Category
Computer Science:
Artificial Intelligence