MEML-GRPO: Heterogeneous Multi-Expert Mutual Learning for RLVR Advancement

Published: August 13, 2025 | arXiv ID: 2508.09670v1

By: Weitao Jia, Jinghui Lu, Haiyang Yu, and more

Potential Business Impact:

Improves how AI models learn to reason by sampling diverse candidate solutions and sharing knowledge across expert prompts.

Recent advances demonstrate that reinforcement learning with verifiable rewards (RLVR) significantly enhances the reasoning capabilities of large language models (LLMs). However, standard RLVR faces challenges with reward sparsity, where zero rewards from consistently incorrect candidate answers provide no learning signal, particularly in challenging tasks. To address this, we propose Multi-Expert Mutual Learning GRPO (MEML-GRPO), an innovative framework that utilizes diverse expert prompts as system prompts to generate a broader range of responses, substantially increasing the likelihood of identifying correct solutions. Additionally, we introduce an inter-expert mutual learning mechanism that facilitates knowledge sharing and transfer among experts, further boosting the model's performance through RLVR. Extensive experiments across multiple reasoning benchmarks show that MEML-GRPO delivers significant improvements, achieving an average performance gain of 4.89% with Qwen and 11.33% with Llama, effectively overcoming the core limitations of traditional RLVR methods.
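The abstract only sketches the method at a high level. As a rough illustration (not the authors' code), the Python sketch below shows how several heterogeneous expert system prompts could each produce a group of sampled responses that are scored by a verifiable reward and normalized GRPO-style within each group, so that at least one group is more likely to contain a nonzero-reward response. The prompt texts, generate_response, and verifiable_reward are placeholder assumptions, and the inter-expert mutual-learning step is omitted.

```python
# Minimal sketch of multi-expert rollouts with GRPO-style group-normalized
# advantages, as described in the abstract. Expert prompts, the rollout
# function, and the verifier below are illustrative placeholders.
import statistics

# Hypothetical expert system prompts encouraging different reasoning styles.
EXPERT_PROMPTS = [
    "You are a careful step-by-step algebraic reasoner.",
    "You solve problems by working backwards from the answer.",
    "You reason by constructing and checking small examples.",
]

def generate_response(system_prompt: str, question: str) -> str:
    """Placeholder for an LLM rollout conditioned on an expert system prompt."""
    return f"[{system_prompt[:20]}...] candidate answer to: {question}"

def verifiable_reward(response: str, reference_answer: str) -> float:
    """Placeholder verifier: 1.0 if the reference answer appears in the response."""
    return 1.0 if reference_answer in response else 0.0

def grpo_advantages(rewards: list[float]) -> list[float]:
    """GRPO-style advantage: reward minus group mean, divided by group std."""
    mean = statistics.mean(rewards)
    std = statistics.pstdev(rewards) or 1.0  # guard against an all-equal group
    return [(r - mean) / std for r in rewards]

def multi_expert_rollout(question: str, reference: str, samples_per_expert: int = 4):
    """Sample a group of responses per expert, score them, and compute advantages.

    With several heterogeneous experts, the chance that at least one group
    contains a correct (reward > 0) response grows, easing reward sparsity.
    """
    results = {}
    for prompt in EXPERT_PROMPTS:
        responses = [generate_response(prompt, question) for _ in range(samples_per_expert)]
        rewards = [verifiable_reward(r, reference) for r in responses]
        advantages = grpo_advantages(rewards)
        results[prompt] = list(zip(responses, rewards, advantages))
    return results

if __name__ == "__main__":
    out = multi_expert_rollout("What is 2 + 2?", reference="4")
    for prompt, group in out.items():
        print(prompt[:40], [round(adv, 2) for _, _, adv in group])
```

In the paper's framing, the mutual-learning mechanism would then transfer knowledge between expert groups (for example, from higher-reward experts to weaker ones) during RLVR training; that step is not reproduced here.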

Page Count
10 pages

Category
Computer Science: Artificial Intelligence