Mitigating Forgetting Between Supervised and Reinforcement Learning Yields Stronger Reasoners
By: Xiangchi Yuan, Xiang Chen, Tong Yu, and more
Potential Business Impact:
Trains AI to reason better with far less data by blending two training methods without forgetting.
Large Language Models (LLMs) show strong reasoning abilities, often amplified by Chain-of-Thought (CoT) prompting and reinforcement learning (RL). Although RL algorithms can substantially improve reasoning, they struggle to expand reasoning boundaries because they learn from their own reasoning trajectories rather than acquiring external knowledge. Supervised fine-tuning (SFT) offers complementary benefits but typically requires large-scale data and risks overfitting. Recent attempts to combine SFT and RL face three main challenges: data inefficiency, algorithm-specific designs, and catastrophic forgetting. We propose a plug-and-play framework that dynamically integrates SFT into RL by selecting challenging examples for SFT. This approach reduces SFT data requirements and remains agnostic to the choice of RL or SFT algorithm. To mitigate catastrophic forgetting of RL-acquired skills during SFT, we select high-entropy tokens for loss calculation and freeze parameters identified as critical for RL. Our method achieves state-of-the-art (SoTA) reasoning performance using only 1.5% of the SFT data and 20.4% of the RL data used by prior SoTA, providing an efficient and plug-and-play solution for combining SFT and RL in reasoning post-training.
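To make the two forgetting-mitigation ideas in the abstract concrete, here is a minimal sketch (not the authors' released code) of (1) restricting the SFT loss to high-entropy tokens and (2) freezing parameters flagged as critical for RL. Names such as `entropy_quantile` and `critical_param_names` are illustrative assumptions; how importance scores are computed during RL is not shown.

```python
# Hedged sketch of the two mechanisms described in the abstract, in PyTorch.
import torch
import torch.nn.functional as F


def high_entropy_sft_loss(logits, target_ids, entropy_quantile=0.8):
    """Cross-entropy SFT loss computed only on high-entropy tokens.

    logits:            (batch, seq_len, vocab) model outputs on the SFT example
    target_ids:        (batch, seq_len) ground-truth token ids
    entropy_quantile:  keep tokens whose predictive entropy falls in the top
                       (1 - entropy_quantile) fraction (assumed hyperparameter).
    """
    # Per-token predictive entropy of the model's output distribution.
    log_probs = F.log_softmax(logits, dim=-1)
    entropy = -(log_probs.exp() * log_probs).sum(dim=-1)          # (batch, seq_len)

    # Mask that keeps only the most uncertain (high-entropy) tokens.
    threshold = torch.quantile(entropy.flatten(), entropy_quantile)
    mask = (entropy >= threshold).float()                          # (batch, seq_len)

    # Token-level cross-entropy, averaged over the selected tokens only.
    token_loss = F.cross_entropy(
        logits.reshape(-1, logits.size(-1)),
        target_ids.reshape(-1),
        reduction="none",
    ).reshape_as(entropy)
    return (token_loss * mask).sum() / mask.sum().clamp(min=1.0)


def freeze_rl_critical_params(model, critical_param_names):
    """Disable gradients for parameters identified as critical to RL-acquired skills.

    `critical_param_names` is assumed to come from some importance score gathered
    during RL; the selection criterion itself is outside this sketch.
    """
    for name, param in model.named_parameters():
        if name in critical_param_names:
            param.requires_grad_(False)
```

In a combined pipeline, one would call `freeze_rl_critical_params` before each SFT phase and use `high_entropy_sft_loss` in place of the standard full-sequence cross-entropy, so SFT updates concentrate on uncertain tokens while leaving RL-critical weights untouched.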
Similar Papers
Beyond SFT: Reinforcement Learning for Safer Large Reasoning Models with Better Reasoning Ability
Computation and Language
Makes smart computers think safely and correctly.
Beyond Two-Stage Training: Cooperative SFT and RL for LLM Reasoning
Computation and Language
Teaches computers to learn better and faster.
Empowering Lightweight MLLMs with Reasoning via Long CoT SFT
CV and Pattern Recognition
Teaches small AI to think better with examples.