EDGE-GRPO: Entropy-Driven GRPO with Guided Error Correction for Advantage Diversity
By: Xingjian Zhang, Siwei Wen, Wenjun Wu, and more
Potential Business Impact:
Teaches computers to think better step-by-step.
Large Language Models (LLMs) have made remarkable progress in enhancing step-by-step reasoning through reinforcement learning. However, the Group Relative Policy Optimization (GRPO) algorithm, which relies on sparse reward rules, often produces identical rewards within a group, leading to the advantage collapse problem. Existing works typically address this challenge from two perspectives: enforcing model reflection to enhance response diversity, and introducing internal feedback to augment the training signal (the advantage). In this work, we begin by analyzing the limitations of model reflection and investigating the policy entropy of responses at the fine-grained sample level. Based on our experimental findings, we propose the EDGE-GRPO algorithm, which adopts Entropy-Driven Advantage and Guided Error Correction to effectively mitigate the problem of advantage collapse. Extensive experiments on several mainstream reasoning benchmarks demonstrate the effectiveness and superiority of our approach. The code is available at https://github.com/ZhangXJ199/EDGE-GRPO.
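The advantage collapse the abstract refers to can be seen directly in the standard GRPO group-relative advantage, where each response's reward is normalized by its group's mean and standard deviation. The snippet below is a minimal illustrative sketch of that normalization (not the authors' implementation, and the helper name grpo_advantages is hypothetical): when a sparse rule-based reward gives every response in the group the same score, all advantages become zero and the group contributes no training signal.

```python
# Illustrative sketch, not EDGE-GRPO itself: standard GRPO group-relative
# advantage normalization, showing how identical rewards collapse to zero.
import statistics

def grpo_advantages(rewards, eps=1e-6):
    """Normalize each reward against its group's mean and standard deviation."""
    mean = statistics.mean(rewards)
    std = statistics.pstdev(rewards)
    return [(r - mean) / (std + eps) for r in rewards]

# Sparse rule-based reward: every sampled response judged incorrect.
print(grpo_advantages([0.0, 0.0, 0.0, 0.0]))  # all zeros -> no gradient signal (advantage collapse)

# Mixed outcomes within the group restore a non-trivial advantage signal.
print(grpo_advantages([1.0, 0.0, 0.0, 1.0]))  # symmetric positive/negative advantages
```

EDGE-GRPO's two components target exactly this failure mode: guided error correction diversifies responses so groups are less likely to be uniformly wrong, and the entropy-driven advantage differentiates samples even when their rule-based rewards coincide.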
Similar Papers
Multi-Layer GRPO: Enhancing Reasoning and Self-Correction in Large Language Models
Machine Learning (CS)
Teaches computers to fix their own mistakes.
SEED-GRPO: Semantic Entropy Enhanced GRPO for Uncertainty-Aware Policy Optimization
Artificial Intelligence
Teaches AI to learn better from questions it's unsure about.
NGRPO: Negative-enhanced Group Relative Policy Optimization
Machine Learning (CS)
Teaches AI to learn better from its mistakes.