Selective Expert Guidance for Effective and Diverse Exploration in Reinforcement Learning of LLMs

Published: October 5, 2025 | arXiv ID: 2510.04140v1

By: Zishang Jiang, Jinyi Han, Tingyun Li, and more

Potential Business Impact:

Improves AI reasoning by guiding the model at key decision points.

Business Areas:
A/B Testing, Data and Analytics

Reinforcement Learning with Verifiable Rewards (RLVR) has become a widely adopted technique for enhancing the reasoning ability of Large Language Models (LLMs). However, the effectiveness of RLVR depends strongly on the capability of the base model, because RLVR requires the model to perform high-quality exploration that is both effective and diverse. Unfortunately, existing methods address this issue by having the model imitate expert trajectories, which improves effectiveness but neglects diversity. To address this, we argue that the expert need only provide guidance at critical decision points rather than along the entire reasoning path. Based on this insight, we propose MENTOR: Mixed-policy Expert Navigation for Token-level Optimization of Reasoning, a framework that provides expert guidance only at critical decision points, enabling effective and diverse exploration in RLVR. Extensive experiments show that MENTOR enables models to capture the essence of expert strategies rather than perform surface imitation, thereby achieving high-quality exploration and superior overall performance. Our code is available online.
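To make the core idea concrete, below is a minimal sketch of a mixed-policy rollout in which an expert model supplies a token only at "critical decision points", approximated here by the policy's next-token entropy. This is an illustrative assumption, not the paper's actual criterion or algorithm; the names `policy`, `expert`, and `entropy_threshold` are all hypothetical, and the models are assumed to be Hugging Face-style causal LMs (batch size 1) whose forward pass returns `.logits`.

```python
import torch

@torch.no_grad()
def mixed_policy_rollout(policy, expert, input_ids,
                         max_new_tokens=128, entropy_threshold=2.0):
    """Generate one trajectory, deferring to the expert at uncertain steps.

    Hypothetical sketch of selective token-level expert guidance:
    ordinary steps sample from the policy (preserving diverse exploration);
    high-entropy steps sample from the expert (preserving effectiveness).
    """
    ids = input_ids                # shape (1, seq_len)
    guided_positions = []          # positions where expert guidance was injected

    for _ in range(max_new_tokens):
        logits = policy(ids).logits[:, -1, :]        # policy next-token logits
        probs = torch.softmax(logits, dim=-1)
        entropy = -(probs * torch.log(probs + 1e-9)).sum(dim=-1)

        if entropy.item() > entropy_threshold:
            # Assumed "critical decision point": the policy is uncertain,
            # so take the next token from the expert instead of imitating
            # the expert's whole reasoning path.
            expert_probs = torch.softmax(expert(ids).logits[:, -1, :], dim=-1)
            next_token = torch.multinomial(expert_probs, num_samples=1)
            guided_positions.append(ids.shape[1])
        else:
            # Ordinary step: sample from the policy itself to keep
            # exploration diverse.
            next_token = torch.multinomial(probs, num_samples=1)

        ids = torch.cat([ids, next_token], dim=-1)

    return ids, guided_positions
```

The recorded `guided_positions` would let a token-level RLVR objective treat expert-supplied and policy-sampled tokens differently during optimization, which is the kind of token-level credit assignment the framework's name suggests; the exact objective is not specified in the abstract.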

Country of Origin
🇨🇳 China


Page Count
19 pages

Category
Computer Science:
Artificial Intelligence