MENTOR: A Reinforcement Learning Framework for Model Enhancement via Teacher-Optimized Rewards in Small Models
By: ChangSu Choi, Hoyun Song, Dongyeon Kim, and more
Potential Business Impact:
Helps small AI models learn complex tool-using tasks better.
Distilling the tool-using capabilities of large language models (LLMs) into smaller, more efficient small language models (SLMs) is a key challenge for their practical application. The predominant approach, supervised fine-tuning (SFT), suffers from poor generalization because it trains models to imitate a static set of teacher trajectories rather than to learn a robust methodology. While reinforcement learning (RL) offers an alternative, standard RL with sparse rewards fails to guide SLMs effectively, leaving them prone to inefficient exploration and suboptimal strategies. To address these distinct challenges, we propose MENTOR, a framework that synergistically combines RL with teacher-guided distillation. Instead of simple imitation, MENTOR employs an RL-based process to learn a more generalizable policy through exploration. In addition, to overcome reward sparsity, it uses a teacher's reference trajectory to construct a dense, composite teacher-guided reward that provides fine-grained guidance. Extensive experiments demonstrate that MENTOR significantly improves the cross-domain generalization and strategic competence of SLMs compared with both SFT and standard sparse-reward RL baselines.
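The abstract does not specify the exact reward formulation, so the sketch below is only a minimal illustration of what a dense, composite teacher-guided reward could look like: a sparse task-completion term plus a dense shaping term that scores step-level alignment between the student's trajectory and the teacher's reference trajectory. All names, weights, and the matching scheme (Step, step_match, teacher_guided_reward, w_outcome, w_guidance) are hypothetical assumptions, not MENTOR's actual implementation.

```python
# Hypothetical sketch of a dense, composite teacher-guided reward.
# The weights and the step-matching rule are illustrative assumptions;
# the paper's abstract does not give MENTOR's exact reward.

from dataclasses import dataclass
from typing import List


@dataclass
class Step:
    """One step of a tool-using trajectory: the tool invoked and its arguments."""
    tool: str
    args: str


def step_match(a: Step, b: Step) -> float:
    """Partial credit: 1.0 for an exact match, 0.5 for the right tool with
    different arguments, 0.0 otherwise (an assumed scoring scheme)."""
    if a.tool != b.tool:
        return 0.0
    return 1.0 if a.args == b.args else 0.5


def teacher_guided_reward(
    student: List[Step],
    teacher: List[Step],
    task_solved: bool,
    w_outcome: float = 1.0,   # weight on the sparse task-completion reward
    w_guidance: float = 0.5,  # weight on the dense teacher-alignment term
) -> float:
    """Composite reward: sparse outcome signal plus a dense shaping term that
    scores step-level alignment with the teacher's reference trajectory."""
    outcome = 1.0 if task_solved else 0.0

    # Dense term: position-by-position alignment with the teacher trajectory,
    # normalized by the teacher's length so skipped steps are penalized.
    if student and teacher:
        n = min(len(student), len(teacher))
        alignment = sum(step_match(student[i], teacher[i]) for i in range(n)) / len(teacher)
    else:
        alignment = 0.0

    return w_outcome * outcome + w_guidance * alignment


if __name__ == "__main__":
    teacher_traj = [Step("search", "weather Seoul"), Step("calculator", "32*1.8+32")]
    student_traj = [Step("search", "weather Seoul"), Step("calculator", "0.5*64")]
    # Even when the task is not solved, the student still receives dense
    # guidance for matching the teacher's tool choices, rather than zero reward.
    print(teacher_guided_reward(student_traj, teacher_traj, task_solved=False))
```

The point of such a shaping term is that an SLM exploring with RL gets gradient signal from partially correct tool-use behavior instead of only from full task success, which is what the abstract identifies as the failure mode of sparse-reward RL.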
Similar Papers
Selective Expert Guidance for Effective and Diverse Exploration in Reinforcement Learning of LLMs
Artificial Intelligence
Teaches AI to think better by guiding key choices.
MENTOR: A Metacognition-Driven Self-Evolution Framework for Uncovering and Mitigating Implicit Risks in LLMs on Domain Tasks
Artificial Intelligence
Teaches AI to think about its mistakes and improve.
Supervised Reinforcement Learning: From Expert Trajectories to Step-wise Reasoning
Computation and Language
Teaches computers to solve hard problems step-by-step.