Score: 1

On the Non-decoupling of Supervised Fine-tuning and Reinforcement Learning in Post-training

Published: January 12, 2026 | arXiv ID: 2601.07389v1

By: Xueyan Niu, Bo Bai, Wei Han, and more

BigTech Affiliations: Huawei

Potential Business Impact:

Alternating supervised fine-tuning and reinforcement learning in post-training can erase gains from the earlier stage, degrading model performance.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Post-training of large language models routinely interleaves supervised fine-tuning (SFT) with reinforcement learning (RL). The two methods have different objectives: SFT minimizes the cross-entropy loss between model outputs and expert responses, while RL maximizes reward signals derived from human preferences or rule-based verifiers. Modern reasoning models have widely adopted the practice of alternating SFT and RL training, yet there is no theoretical account of whether the two can be decoupled. We prove that decoupling is impossible in either order: (1) SFT-then-RL coupling: RL increases SFT loss under SFT optimality, and (2) RL-then-SFT coupling: SFT lowers the reward achieved by RL. Experiments on Qwen3-0.6B confirm the predicted degradation, verifying that SFT and RL cannot be separated without loss of prior performance in the post-training stage.
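
For orientation, a minimal sketch of the two objectives as the abstract describes them, written in standard notation (policy $\pi_\theta$, expert dataset $\mathcal{D}$, reward $r$) that is assumed here and may differ from the paper's own formulation:

$$\mathcal{L}_{\mathrm{SFT}}(\theta) = -\,\mathbb{E}_{(x,y)\sim\mathcal{D}}\big[\log \pi_\theta(y \mid x)\big], \qquad J_{\mathrm{RL}}(\theta) = \mathbb{E}_{x\sim\mathcal{D},\; y\sim\pi_\theta(\cdot\mid x)}\big[r(x,y)\big]$$

Read this way, the abstract's coupling claims say that starting from an SFT optimum $\theta^\star \in \arg\min_\theta \mathcal{L}_{\mathrm{SFT}}(\theta)$, an RL update that raises $J_{\mathrm{RL}}$ must raise $\mathcal{L}_{\mathrm{SFT}}$, and conversely a subsequent SFT update lowers the reward $J_{\mathrm{RL}}$ attained by RL; the precise theorem statements are in the paper.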

Country of Origin
🇨🇳 China

Page Count
14 pages

Category
Computer Science:
Machine Learning (CS)