Score: 1

EvoCoT: Overcoming the Exploration Bottleneck in Reinforcement Learning

Published: August 11, 2025 | arXiv ID: 2508.07809v3

By: Huanyu Liu, Jia Li, Chang Yu, and more

Potential Business Impact:

Helps computers solve harder problems by learning step by step.

Reinforcement learning with verifiable reward (RLVR) has become a promising paradigm for post-training large language models (LLMs) to improve their reasoning capability. However, when rollout accuracy is low on hard problems, the reward becomes sparse, limiting learning efficiency and causing an exploration bottleneck. Existing approaches either rely on teacher models for distillation, which limits scalability, or filter out difficult problems, which restricts reasoning improvement through exploration. We propose EvoCoT, a self-evolving curriculum learning framework based on two-stage chain-of-thought (CoT) reasoning optimization. EvoCoT constrains the exploration space by self-generating and verifying CoT trajectories, then gradually shortens the CoT steps to expand the space in a controlled way. The framework enables LLMs to stably learn from initially unsolved hard problems under sparse rewards. We apply EvoCoT to multiple LLM families, including Qwen, DeepSeek, and Llama. Experiments show that EvoCoT enables LLMs to solve previously unsolved problems, improves reasoning capability without external CoT supervision, and is compatible with various RL fine-tuning methods. We release the source code to support future research.
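The two-stage loop described in the abstract lends itself to a compact sketch. The Python below is a minimal illustration under stated assumptions, not the authors' released implementation: sample_cot, verify, and rl_step are hypothetical stand-ins for the model's CoT sampler, the answer verifier, and an RL fine-tuning update (e.g., a PPO/GRPO-style step), and the stubs return canned values so the skeleton runs end to end.

```python
from dataclasses import dataclass
from typing import Dict, List


@dataclass
class Problem:
    question: str
    answer: str  # verifiable ground-truth answer (the "verifiable reward")


# --- Hypothetical placeholders; a real system would call the LLM here. ---

def sample_cot(question: str) -> List[str]:
    """Sample one chain-of-thought trajectory as a list of steps."""
    return [f"restate: {question}", "derive intermediate result", "final: 42"]


def verify(cot: List[str], answer: str) -> bool:
    """Check the trajectory's final answer against the ground truth."""
    return cot[-1].removeprefix("final: ") == answer


def rl_step(question: str, hint_steps: List[str]) -> None:
    """One RL fine-tuning update with the hint prepended to the prompt (stub)."""
    pass


def evocot(problems: List[Problem], samples_per_problem: int = 8) -> None:
    # Stage 1: self-generate and verify CoT trajectories for hard problems,
    # keeping only trajectories whose final answers check out.
    verified: Dict[str, List[str]] = {}
    for p in problems:
        for _ in range(samples_per_problem):
            cot = sample_cot(p.question)
            if verify(cot, p.answer):
                verified[p.question] = cot
                break

    # Stage 2: curriculum over hint length. Start with the full verified
    # reasoning as a hint (final answer withheld), then drop trailing steps
    # round by round so the model must rediscover more of the reasoning
    # itself: a controlled expansion of the exploration space.
    for p in problems:
        cot = verified.get(p.question)
        if cot is None:
            continue  # still unsolved; retry in a later self-evolution round
        for keep in range(len(cot) - 1, -1, -1):
            rl_step(p.question, hint_steps=cot[:keep])
```

The point of the sketch is the design choice it isolates: stage 1 needs no teacher model, since the LLM keeps only trajectories it can verify against the known answer, and stage 2 widens the search space one step at a time rather than all at once, which is how the framework keeps rewards dense enough to learn from initially unsolved problems.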

Country of Origin
🇨🇳 China

Repos / Data Links

Page Count
15 pages

Category
Computer Science:
Machine Learning (CS)