Multi-Layer GRPO: Enhancing Reasoning and Self-Correction in Large Language Models
By: Fei Ding, Baiqiao Wang, Zijian Zeng, and more
Potential Business Impact:
Teaches computers to fix their own mistakes.
The Group Relative Policy Optimization (GRPO) algorithm has demonstrated considerable success in enhancing the reasoning capabilities of large language models (LLMs), as evidenced by DeepSeek-R1. However, the absence of intermediate supervision in GRPO frequently leads to inefficient exploration dynamics: a single error in a complex reasoning chain can invalidate the entire solution, causing the reward to vanish abruptly and compromising training stability.

To address these challenges, we propose MGRPO (Multi-layer GRPO). MGRPO operates in two layers: the first layer employs standard GRPO to generate an initial response. This response, along with the original query, is then fed into a second-layer GRPO process that is trained specifically to identify and correct errors in the initial response, effectively creating a self-correction loop. This mechanism provides implicit process-level supervision by rewarding successful error correction, without requiring an explicit, densely annotated reward model. Experimental results on several mathematical reasoning benchmarks demonstrate that MGRPO significantly outperforms standard GRPO, achieving superior performance by fostering both reasoning and self-correction abilities.
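The sketch below illustrates the two-layer loop described in the abstract: a first GRPO pass on the original query, followed by a second GRPO pass on the query plus a first-pass attempt, rewarded when the correction succeeds. It is a minimal illustration only, not the authors' implementation; the helpers sample_fn, reward_fn, update_fn, group_size, and the wording of the correction prompt are assumptions introduced here.

from dataclasses import dataclass
from typing import Callable, List

@dataclass
class GroupSample:
    text: str
    reward: float

def group_relative_advantages(rewards: List[float]) -> List[float]:
    """GRPO-style advantage: each reward standardized within its sampled group."""
    mean = sum(rewards) / len(rewards)
    var = sum((r - mean) ** 2 for r in rewards) / len(rewards)
    std = var ** 0.5 or 1.0  # avoid division by zero when all rewards are equal
    return [(r - mean) / std for r in rewards]

def mgrpo_step(
    query: str,
    sample_fn: Callable[[str, int], List[str]],   # hypothetical policy sampler
    reward_fn: Callable[[str, str], float],       # hypothetical outcome reward (e.g. answer check)
    update_fn: Callable[[str, List[GroupSample], List[float]], None],  # hypothetical policy update
    group_size: int = 4,
) -> None:
    # ----- Layer 1: standard GRPO on the original query -----
    first_pass = sample_fn(query, group_size)
    r1 = [reward_fn(query, y) for y in first_pass]
    update_fn(query,
              [GroupSample(y, r) for y, r in zip(first_pass, r1)],
              group_relative_advantages(r1))

    # ----- Layer 2: self-correction GRPO -----
    # The original query plus a first-pass response is fed back in; the new group
    # is rewarded for producing a corrected solution, which implicitly supervises
    # the intermediate (error-finding) process without a dense reward model.
    for y1 in first_pass:
        correction_prompt = (
            f"{query}\n\nPrevious attempt:\n{y1}\n\nReview the attempt and correct any errors."
        )
        second_pass = sample_fn(correction_prompt, group_size)
        r2 = [reward_fn(query, y2) for y2 in second_pass]
        update_fn(correction_prompt,
                  [GroupSample(y2, rr) for y2, rr in zip(second_pass, r2)],
                  group_relative_advantages(r2))

In practice the two updates could share one policy, so gains from successful corrections also improve first-pass reasoning; that coupling is the intuition behind the self-correction loop, though the exact training schedule is not specified in this abstract.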
Similar Papers
GRPO-RM: Fine-Tuning Representation Models via GRPO-Driven Reinforcement Learning
Machine Learning (CS)
Teaches AI to learn better from data.
Stepwise Guided Policy Optimization: Coloring your Incorrect Reasoning in GRPO
Machine Learning (CS)
Helps AI learn from mistakes, not just successes.
From Reasoning to Code: GRPO Optimization for Underrepresented Languages
Machine Learning (CS)
Teaches computers to write code for rare languages.