MEML-GRPO: Heterogeneous Multi-Expert Mutual Learning for RLVR Advancement
By: Weitao Jia, Jinghui Lu, Haiyang Yu, and more
Potential Business Impact:
Helps AI learn better by sharing knowledge.
Recent advances demonstrate that reinforcement learning with verifiable rewards (RLVR) significantly enhances the reasoning capabilities of large language models (LLMs). However, standard RLVR faces challenges with reward sparsity, where zero rewards from consistently incorrect candidate answers provide no learning signal, particularly in challenging tasks. To address this, we propose Multi-Expert Mutual Learning GRPO (MEML-GRPO), an innovative framework that utilizes diverse expert prompts as system prompts to generate a broader range of responses, substantially increasing the likelihood of identifying correct solutions. Additionally, we introduce an inter-expert mutual learning mechanism that facilitates knowledge sharing and transfer among experts, further boosting the model's performance through RLVR. Extensive experiments across multiple reasoning benchmarks show that MEML-GRPO delivers significant improvements, achieving an average performance gain of 4.89% with Qwen and 11.33% with Llama, effectively overcoming the core limitations of traditional RLVR methods.
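To make the abstract's mechanism concrete, here is a minimal sketch of one rollout-and-advantage step in the spirit of MEML-GRPO. Everything in it is an illustrative assumption, not the authors' code: the expert prompts, the helper names (`sample_response`, `verify`), and in particular the mutual-learning signal, which is rendered here as one plausible reading of "knowledge sharing among experts" (reusing the best-performing expert's correct responses as extra training targets for the others).

```python
# Hedged sketch of a MEML-GRPO-style step: multiple expert system prompts each
# generate a group of responses, rewards come from a verifier, and advantages
# are computed group-relative as in GRPO. All names are assumptions.
import random
import statistics

EXPERT_PROMPTS = [  # diverse expert personas used as system prompts (assumed)
    "You are a meticulous mathematician. Reason step by step.",
    "You are a pragmatic engineer. Estimate, then verify.",
    "You are a careful teacher. Explain each step simply.",
]
GROUP_SIZE = 4  # responses sampled per expert, as in GRPO's group rollouts


def sample_response(system_prompt: str, question: str) -> str:
    """Placeholder for an LLM call; returns a fake candidate answer."""
    return f"answer-{random.randint(0, 5)}"


def verify(response: str, gold: str) -> float:
    """Verifiable reward: 1.0 if the candidate matches the gold answer."""
    return 1.0 if response == gold else 0.0


def group_advantages(rewards):
    """GRPO-style group-relative advantage: standardize within the group."""
    mean = statistics.mean(rewards)
    std = statistics.pstdev(rewards) or 1.0  # guard against zero variance
    return [(r - mean) / std for r in rewards]


def meml_grpo_step(question: str, gold: str):
    rollouts = {}  # expert index -> list of (response, reward, advantage)
    for i, prompt in enumerate(EXPERT_PROMPTS):
        responses = [sample_response(prompt, question) for _ in range(GROUP_SIZE)]
        rewards = [verify(r, gold) for r in responses]
        advs = group_advantages(rewards)
        rollouts[i] = list(zip(responses, rewards, advs))

    # Mutual learning (assumed form): pick the expert whose group earned the
    # highest total reward and share its correct responses with the others,
    # so experts whose own groups scored all zeros still get a signal.
    best = max(rollouts, key=lambda i: sum(r for _, r, _ in rollouts[i]))
    shared = [resp for resp, r, _ in rollouts[best] if r > 0.0]
    return rollouts, best, shared


if __name__ == "__main__":
    rollouts, best, shared = meml_grpo_step("toy question", gold="answer-3")
    print(f"best expert: {best}, shared correct responses: {shared}")
```

The sketch shows why the design targets reward sparsity: with a single prompt, a group of uniformly wrong answers yields all-zero rewards and hence zero advantages, while diverse expert prompts raise the chance that at least one group contains a verified-correct response whose signal can be shared.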
Similar Papers
R1-VL: Learning to Reason with Multimodal Large Language Models via Step-wise Group Relative Policy Optimization
Artificial Intelligence
Teaches AI to think through problems, not just copy.
Selective Expert Guidance for Effective and Diverse Exploration in Reinforcement Learning of LLMs
Artificial Intelligence
Teaches AI to think better by guiding key choices.
OThink-MR1: Stimulating multimodal generalized reasoning capabilities via dynamic reinforcement learning
Machine Learning (CS)
Teaches AI to understand and reason better.