EEPO: Exploration-Enhanced Policy Optimization via Sample-Then-Forget
By: Liang Chen, Xueting Han, Qizhou Wang, and more
Potential Business Impact:
Helps AI learn new things by forgetting and trying again.
Balancing exploration and exploitation remains a central challenge in reinforcement learning with verifiable rewards (RLVR) for large language models (LLMs). Current RLVR methods often overemphasize exploitation, leading to entropy collapse, diminished exploratory capacity, and ultimately limited performance gains. Although techniques that increase policy stochasticity can promote exploration, they frequently fail to escape dominant behavioral modes. This creates a self-reinforcing loop (repeatedly sampling and rewarding dominant modes) that further erodes exploration. We introduce Exploration-Enhanced Policy Optimization (EEPO), a framework that promotes exploration via two-stage rollouts with adaptive unlearning. In the first stage, the model generates half of the trajectories; it then undergoes a lightweight unlearning step to temporarily suppress these sampled responses, forcing the second stage to explore different regions of the output space. This sample-then-forget mechanism disrupts the self-reinforcing loop and promotes wider exploration during rollouts. Across five reasoning benchmarks, EEPO outperforms GRPO, achieving average relative gains of 24.3% on Qwen2.5-3B, 33.0% on Llama3.2-3B-Instruct, and 10.4% on Qwen3-8B-Base.
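The sketch below illustrates the general shape of a two-stage, sample-then-forget rollout as described in the abstract, using a toy PyTorch policy. The model, the gradient-descent-on-log-prob unlearning objective, and hyperparameters such as `unlearn_steps` and `unlearn_lr` are illustrative assumptions, not the paper's actual implementation.

```python
# Minimal sketch of a sample-then-forget rollout loop (assumptions noted above).
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

VOCAB, SEQ_LEN, GROUP = 16, 8, 8  # toy vocabulary, rollout length, group size


class TinyPolicy(nn.Module):
    """Stand-in for an LLM policy: embeds the previous token, predicts the next."""

    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, 32)
        self.head = nn.Linear(32, VOCAB)

    def forward(self, tokens):                        # tokens: (batch, t)
        return self.head(self.embed(tokens[:, -1]))   # next-token logits


def sample_rollouts(policy, n):
    """Autoregressively sample n toy trajectories from the policy."""
    seqs = torch.zeros(n, 1, dtype=torch.long)  # shared start token
    with torch.no_grad():
        for _ in range(SEQ_LEN):
            probs = F.softmax(policy(seqs), dim=-1)
            seqs = torch.cat([seqs, torch.multinomial(probs, 1)], dim=1)
    return seqs


def sequence_log_prob(policy, seqs):
    """Summed log-probability the policy assigns to the sampled tokens."""
    total = 0.0
    for t in range(1, seqs.size(1)):
        logp = F.log_softmax(policy(seqs[:, :t]), dim=-1)
        total = total + logp.gather(1, seqs[:, t:t + 1]).sum()
    return total


def two_stage_rollout(policy, unlearn_steps=2, unlearn_lr=1e-2):
    """Sample half the group, briefly 'forget' it on a scratch copy, sample the rest."""
    first_half = sample_rollouts(policy, GROUP // 2)

    # Lightweight, temporary unlearning: minimizing the log-prob of the
    # first-stage responses on a copied policy lowers their likelihood,
    # while the original weights used for the policy update stay untouched.
    scratch = copy.deepcopy(policy)
    opt = torch.optim.SGD(scratch.parameters(), lr=unlearn_lr)
    for _ in range(unlearn_steps):
        opt.zero_grad()
        sequence_log_prob(scratch, first_half).backward()
        opt.step()

    second_half = sample_rollouts(scratch, GROUP // 2)  # nudged toward other modes
    return torch.cat([first_half, second_half], dim=0)


if __name__ == "__main__":
    torch.manual_seed(0)
    rollouts = two_stage_rollout(TinyPolicy())
    print(rollouts.shape)  # (GROUP, SEQ_LEN + 1) trajectories for a GRPO-style update
```

In this reading, the combined group of trajectories would then be scored and used for a standard GRPO-style policy update on the original (non-unlearned) weights; the suppression is discarded after each rollout phase.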
Similar Papers
Evolutionary Policy Optimization
Machine Learning (CS)
Teaches computers to learn faster and better.
RePO: Replay-Enhanced Policy Optimization
Computation and Language
Makes AI smarter with less computer power.
Explore Data Left Behind in Reinforcement Learning for Reasoning Language Models
Computation and Language
Teaches computers to solve math problems better.