Score: 1

SuperRL: Reinforcement Learning with Supervision to Boost Language Model Reasoning

Published: June 1, 2025 | arXiv ID: 2506.01096v2

By: Yihao Liu, Shuocheng Li, Lang Cao, and more

BigTech Affiliations: Microsoft

Potential Business Impact:

Helps language models learn to reason by falling back on expert examples when trial-and-error rewards are too sparse to learn from.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Large language models are increasingly applied to complex reasoning tasks for which high-quality offline data, such as expert-annotated solutions and distilled reasoning traces, is often available. However, in sparse-reward environments, reinforcement learning (RL) struggles to sample successful trajectories, leading to inefficient learning. At the same time, standard on-policy RL methods leave these offline trajectories, which encode correct reasoning paths, unused. We introduce SuperRL, a unified training framework that adaptively alternates between RL and supervised fine-tuning (SFT). Whenever every rollout for a given instance receives zero reward, indicating the absence of a learning signal, SuperRL falls back to SFT on the curated offline data. Extensive experiments across diverse reasoning benchmarks show that SuperRL surpasses vanilla RL, delivering higher sample efficiency, stronger generalization, and improved robustness under sparse rewards.
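
The switching rule described in the abstract can be sketched as a per-instance check on rollout rewards. The snippet below is a minimal illustration of that idea only, not the authors' implementation: the callables `sample_rollouts`, `reward_fn`, `rl_update`, and `sft_update` are hypothetical placeholders standing in for whatever sampler, verifier, policy-gradient step, and supervised step a real training loop would use.

```python
from typing import Callable, List


def super_rl_step(
    sample_rollouts: Callable[[str, int], List[str]],  # prompt -> candidate solutions
    reward_fn: Callable[[str, str], float],            # (prompt, solution) -> task reward
    rl_update: Callable[[str, List[str], List[float]], float],  # on-policy RL step, returns loss
    sft_update: Callable[[str, str], float],           # supervised step on an offline trace, returns loss
    prompt: str,
    offline_trace: str,
    num_rollouts: int = 8,
) -> float:
    """One SuperRL-style training step, as described in the abstract:
    use RL when any rollout earns reward, otherwise fall back to SFT."""
    # Sample on-policy rollouts and score each one with the task reward.
    rollouts = sample_rollouts(prompt, num_rollouts)
    rewards = [reward_fn(prompt, r) for r in rollouts]

    if all(r == 0 for r in rewards):
        # Sparse-reward case: every rollout failed, so there is no RL signal.
        # Fall back to supervised fine-tuning on the curated offline trace.
        return sft_update(prompt, offline_trace)

    # At least one rollout succeeded: take a standard on-policy RL step.
    return rl_update(prompt, rollouts, rewards)
```

In this reading, the framework degrades gracefully: instances the policy can already solve are trained with RL, while instances where all rollouts fail still contribute a learning signal through the offline trace rather than being wasted.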

Country of Origin
🇺🇸 United States

Page Count
27 pages

Category
Computer Science:
Artificial Intelligence