EvoCoT: Overcoming the Exploration Bottleneck in Reinforcement Learning
By: Huanyu Liu, Jia Li, Chang Yu, and more
Potential Business Impact:
Helps computers solve harder problems by learning step-by-step.
Reinforcement learning with verifiable reward (RLVR) has become a promising paradigm for post-training large language models (LLMs) to improve their reasoning capability. However, when the rollout accuracy is low on hard problems, the reward becomes sparse, limiting learning efficiency and causing exploration bottlenecks. Existing approaches either rely on teacher models for distillation or filter out difficult problems, which limits scalability or restricts reasoning improvement through exploration. We propose EvoCoT, a self-evolving curriculum learning framework based on two-stage chain-of-thought (CoT) reasoning optimization. EvoCoT constrains the exploration space by self-generating and verifying CoT trajectories, then gradually shortens CoT steps to expand the space in a controlled way. The framework enables LLMs to stably learn from initially unsolved hard problems under sparse rewards. We apply EvoCoT to multiple LLM families, including Qwen, DeepSeek, and Llama. Experiments show that EvoCoT enables LLMs to solve previously unsolved problems, improves reasoning capability without external CoT supervision, and is compatible with various RL fine-tuning methods. We release the source code to support future research.
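To make the two-stage idea concrete, here is a minimal, self-contained Python sketch of the curriculum schedule the abstract describes. Every name in it (Problem, generate_cot, verify, rl_update, evocot_curriculum) is a hypothetical toy stand-in written for illustration, not the authors' released code; only the schedule itself, keeping verified self-generated CoT trajectories and then shrinking the CoT prefix given as a hint stage by stage, follows the description above.

```python
# Minimal sketch of a self-evolving CoT curriculum in the spirit of EvoCoT.
# All names below are hypothetical stand-ins for illustration, not the
# paper's implementation; only the two-stage schedule mirrors the abstract:
# keep verified self-generated CoTs, then shorten the CoT hint stage by
# stage so exploration expands in a controlled way.
from dataclasses import dataclass


@dataclass
class Problem:
    question: str
    answer: str  # verifiable ground-truth answer


def generate_cot(problem: Problem) -> list[str]:
    """Toy stand-in: pretend the model wrote a step-by-step CoT whose
    last line is its final answer."""
    steps = [f"step {i}: reason about {problem.question!r}" for i in (1, 2, 3)]
    return steps + [problem.answer]


def verify(cot: list[str], problem: Problem) -> bool:
    """Verifiable reward: accept the trajectory only if its final answer
    matches the ground truth."""
    return bool(cot) and cot[-1] == problem.answer


def rl_update(problem: Problem, hint: list[str]) -> None:
    """Toy stand-in for one RL fine-tuning step; in practice the hint
    (a CoT prefix) would be prepended to the prompt to constrain
    exploration to the remaining steps."""
    print(f"RL step on {problem.question!r} with {len(hint)} hint step(s)")


def evocot_curriculum(problems: list[Problem], num_stages: int = 4) -> None:
    # Stage 1: self-generate CoT trajectories and keep only verified ones.
    verified = []
    for prob in problems:
        cot = generate_cot(prob)
        if verify(cot, prob):
            verified.append((prob, cot))

    # Stage 2: gradually shorten the CoT prefix given as a hint,
    # expanding the exploration space in a controlled way.
    for stage in range(num_stages):
        keep = 1.0 - stage / max(num_stages - 1, 1)  # 1.0 -> 0.0
        for prob, cot in verified:
            steps = cot[:-1]  # reasoning steps without the final answer
            hint = steps[: int(len(steps) * keep)]
            rl_update(prob, hint)


if __name__ == "__main__":
    evocot_curriculum([Problem("2 + 2 = ?", "4"), Problem("3 * 5 = ?", "15")])
```

In a real setting, generate_cot would sample from the LLM and rl_update would run whatever RL fine-tuning method is chosen; the shrinking-hint schedule is the only part carried over from the description above.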
Similar Papers
CoT-Evo: Evolutionary Distillation of Chain-of-Thought for Scientific Reasoning
Computation and Language
Teaches computers to reason better in science.