Mitigating Think-Answer Mismatch in LLM Reasoning Through Noise-Aware Advantage Reweighting
By: Si Shen, Peijun Shen, Wenhua Zhao, and more
Potential Business Impact:
Makes AI better at math by correcting noisy reward signals during training.
Group-Relative Policy Optimization (GRPO) is a key technique for training large reasoning models, yet it suffers from a critical vulnerability: the Think-Answer Mismatch, where noisy reward signals corrupt the learning process. This problem is most severe in unbalanced response groups, paradoxically degrading the signal precisely when it should be most informative. To address this challenge, we propose Stable Group-Relative Policy Optimization (S-GRPO), a principled enhancement that derives optimal, noise-aware advantage weights to stabilize training. Our comprehensive experiments on mathematical reasoning benchmarks demonstrate S-GRPO's effectiveness and robustness. Across models, S-GRPO significantly outperforms Dr. GRPO, achieving performance gains of +2.5% on Qwen-Math-7B-Base, +2.2% on Llama-3.2-3B-Base, and +2.4% on Qwen-Math-1.5B-Instruct. Most critically, while standard GRPO fails to learn under 20% synthetic reward noise, S-GRPO maintains stable learning progress. These results highlight S-GRPO's potential for more robust and effective training of large-scale reasoning models. Code and data are available at: https://github.com/shenpeijun0212/S-GRPO
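The core idea in the abstract, reweighting group-relative advantages according to how vulnerable they are to reward noise, can be sketched in code. The sketch below is a minimal illustration under stated assumptions, not the paper's derivation: the function names, the noise_rate parameter, and the variance-based shrinkage weight are hypothetical stand-ins for the optimal noise-aware weights that S-GRPO derives.

import torch

def grpo_advantages(rewards: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    # Standard GRPO advantage: each reward normalized by the group mean and std.
    return (rewards - rewards.mean()) / (rewards.std() + eps)

def noise_aware_advantages(rewards: torch.Tensor,
                           noise_rate: float = 0.2,
                           eps: float = 1e-6) -> torch.Tensor:
    # Illustrative noise-aware reweighting (NOT the paper's derived weights):
    # shrink advantages in unbalanced groups, where a single flipped reward
    # moves the group baseline the most and the signal is least trustworthy.
    adv = grpo_advantages(rewards, eps)
    p = rewards.float().mean().clamp(eps, 1.0 - eps)  # fraction of correct answers
    signal = p * (1.0 - p)                            # small when the group is unbalanced
    weight = signal / (signal + noise_rate * (1.0 - noise_rate))
    return weight * adv

# Example: a group of 8 sampled answers with binary correctness rewards.
group_rewards = torch.tensor([1., 0., 0., 0., 0., 0., 0., 0.])
print(noise_aware_advantages(group_rewards, noise_rate=0.2))

In this sketch, a group where only one of eight answers is correct has its advantages shrunk toward zero when the assumed reward-noise rate is high, matching the abstract's observation that unbalanced groups are exactly where noisy rewards do the most damage.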
Similar Papers
Noise-corrected GRPO: From Noisy Rewards to Unbiased Gradients
Machine Learning (CS)
Makes AI smarter by fixing bad instructions.
NGRPO: Negative-enhanced Group Relative Policy Optimization
Machine Learning (CS)
Teaches AI to learn from mistakes better.
Multi-Layer GRPO: Enhancing Reasoning and Self-Correction in Large Language Models
Machine Learning (CS)
Teaches computers to fix their own mistakes.