On the Effect of Negative Gradient in Group Relative Deep Reinforcement Optimization

Published: May 24, 2025 | arXiv ID: 2505.18830v1

By: Wenlong Deng, Yi Ren, Muchen Li, and more

Potential Business Impact:

Improves the accuracy of reasoning LLMs by correcting how reinforcement learning penalizes tokens in wrong answers during training.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Reinforcement learning (RL) has become popular for enhancing the reasoning capabilities of large language models (LLMs), with Group Relative Policy Optimization (GRPO) emerging as a widely used algorithm in recent systems. Despite GRPO's widespread adoption, we identify a previously unrecognized phenomenon we term Lazy Likelihood Displacement (LLD), wherein the likelihood of correct responses marginally increases or even decreases during training. This behavior mirrors a recently discovered misalignment issue in Direct Preference Optimization (DPO), attributed to the influence of negative gradients. We provide a theoretical analysis of GRPO's learning dynamics, identifying the source of LLD as the naive penalization of all tokens in an incorrect response with equal strength. To address this, we develop a method called NTHR, which downweights penalties on the tokens that contribute to LLD. Unlike prior DPO-based approaches, NTHR takes advantage of GRPO's group-based structure, using correct responses as anchors to identify influential tokens. Experiments on math reasoning benchmarks demonstrate that NTHR effectively mitigates LLD, yielding consistent performance gains across models ranging from 0.5B to 3B parameters.
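For intuition, here is a minimal PyTorch sketch of the two ideas the abstract names: group-relative advantages and downweighted penalties on tokens of incorrect responses. The function names `grpo_advantages` and `nthr_style_loss`, the `downweight_mask`, and the scale `alpha` are hypothetical illustrations, not the paper's implementation; in particular, the paper's criterion for identifying LLD-influential tokens (anchoring on correct responses) is not reproduced here and is stubbed out as a given mask.

```python
import torch

def grpo_advantages(rewards: torch.Tensor) -> torch.Tensor:
    """Group-relative advantage: standardize each response's reward
    against the mean and std of its sampled group."""
    return (rewards - rewards.mean()) / (rewards.std() + 1e-8)

def nthr_style_loss(logprobs: torch.Tensor,
                    advantages: torch.Tensor,
                    downweight_mask: torch.Tensor,
                    alpha: float = 0.5) -> torch.Tensor:
    """Per-token policy-gradient loss with softened negative gradients.

    logprobs:        (G, T) token log-probs for G sampled responses
    advantages:      (G,)   group-relative advantages (negative = incorrect)
    downweight_mask: (G, T) 1 where a token is flagged as LLD-influential
                     (hypothetical stand-in for the paper's anchor-based step)
    alpha:           penalty scale for flagged tokens (illustrative knob)
    """
    adv = advantages.unsqueeze(1)  # broadcast advantage over tokens
    weights = torch.ones_like(logprobs)
    # Soften the penalty only on negative-advantage (incorrect) responses,
    # and only on the tokens flagged as contributing to LLD.
    weights = torch.where((adv < 0) & (downweight_mask > 0),
                          torch.full_like(weights, alpha), weights)
    # Standard policy-gradient objective: maximize advantage-weighted log-prob.
    return -(weights * adv * logprobs).mean()
```

With `alpha=1.0` this reduces to a plain GRPO-style per-token objective; `alpha < 1` weakens the negative gradient on the flagged tokens, which is the abstract's proposed remedy for LLD.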

Page Count
20 pages

Category
Computer Science:
Machine Learning (CS)