GRPO-LEAD: A Difficulty-Aware Reinforcement Learning Approach for Concise Mathematical Reasoning in Language Models

Published: April 13, 2025 | arXiv ID: 2504.09696v2

By: Jixiao Zhang, Chunsheng Zuo

Affiliations: Johns Hopkins University

Potential Business Impact:

Trains language models to solve math problems more accurately while producing shorter reasoning traces, cutting inference cost for reasoning-heavy applications.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Group Relative Policy Optimization (GRPO), which is widely adopted by R1-like reasoning models, has advanced mathematical reasoning. Nevertheless, GRPO faces challenges in reward sparsity, verbosity, and inadequate focus on problem difficulty. We propose GRPO-LEAD, enhancing GRPO with: (1) length-regularized rewards to encourage conciseness while maintaining accuracy; (2) explicit penalties for incorrect solutions to improve model precision; and (3) difficulty-aware advantage reweighting for robust generalization on challenging problems. Comprehensive evaluations demonstrate that GRPO-LEAD significantly improves reasoning accuracy, conciseness, and efficiency. Our approach achieves state-of-the-art performance for 14B-scale models, underscoring the synergy of our methods with appropriate model scale and high-quality data. Our source code, generated dataset, and models are available at https://github.com/aeroplanepaper/GRPO-LEAD.
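The abstract's three modifications lend themselves to a short sketch. Below is a minimal, illustrative Python reconstruction of how a length-regularized reward, an explicit wrong-answer penalty, and difficulty-aware advantage reweighting could compose on top of GRPO's group-relative baseline. The function names, the tanh-based length penalty, the fixed wrong-answer penalty, and the linear difficulty schedule are all assumptions for illustration, not the paper's exact formulas; see the linked repository for the authors' implementation.

```python
import numpy as np

def shaped_reward(is_correct: bool, length: int, mean_len: float, std_len: float,
                  alpha: float = 0.1, wrong_penalty: float = 1.0) -> float:
    """Length-regularized reward with an explicit penalty for wrong answers.

    Hypothetical form: correct answers earn 1 minus a penalty that grows with
    the response's standardized length within its sample group; incorrect
    answers receive a fixed negative reward.
    """
    if not is_correct:
        return -wrong_penalty
    z = (length - mean_len) / max(std_len, 1e-6)  # standardized length within the group
    return 1.0 - alpha * np.tanh(z)               # shorter-than-average answers earn more

def difficulty_weight(pass_rate: float, k: float = 2.0) -> float:
    """Upweight advantages on hard problems (low group pass rate).

    Assumed monotone schedule; the paper's reweighting may differ.
    """
    return 1.0 + k * (1.0 - pass_rate)

def lead_advantages(correct, lengths):
    """GRPO-style group-relative advantages with LEAD-like shaping applied."""
    lengths = np.asarray(lengths, dtype=float)
    mean_len, std_len = lengths.mean(), lengths.std()
    rewards = np.array([shaped_reward(c, l, mean_len, std_len)
                        for c, l in zip(correct, lengths)])
    # Group-relative baseline: normalize rewards within the sampled group.
    adv = (rewards - rewards.mean()) / max(rewards.std(), 1e-6)
    return adv * difficulty_weight(np.mean(correct))

if __name__ == "__main__":
    # Eight sampled solutions to one prompt: correctness flags and token lengths.
    correct = [True, True, False, True, False, False, False, True]
    lengths = [420, 380, 900, 350, 1100, 760, 640, 510]
    print(lead_advantages(correct, lengths))
```

Standardizing lengths within the sampled group keeps the conciseness pressure relative rather than absolute, which matches the group-relative spirit of GRPO.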

Country of Origin
🇺🇸 United States

Page Count
13 pages

Category
Computer Science: Computation and Language