Proximal Supervised Fine-Tuning
By: Wenhong Zhu, Ruobing Xie, Rui Wang, and more
Potential Business Impact:
Keeps AI smart when learning new things.
Supervised fine-tuning (SFT) of foundation models often generalizes poorly: prior capabilities deteriorate after tuning on new tasks or domains. Inspired by trust-region policy optimization (TRPO) and proximal policy optimization (PPO) in reinforcement learning (RL), we propose Proximal SFT (PSFT). This fine-tuning objective brings the benefits of a trust region to SFT, constraining policy drift while remaining competitive on the tuning task. By viewing SFT as a special case of policy gradient methods with a constant positive advantage, we derive PSFT, which stabilizes optimization and improves generalization while leaving room for further optimization in subsequent post-training stages. Experiments across mathematical and human-value domains show that PSFT matches SFT in-domain, outperforms it on out-of-domain generalization, remains stable under prolonged training without entropy collapse, and provides a stronger foundation for subsequent optimization.
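To make the "SFT as policy gradient with a constant positive advantage" view concrete, here is a minimal PyTorch sketch of a PPO-style clipped token loss with the advantage fixed to 1. This is an illustration of the general idea only, not the paper's implementation; the function name `psft_loss`, the clipping range `eps = 0.2`, and the masking convention are assumptions introduced for this example.

```python
# Hypothetical sketch: PPO-style clipped surrogate applied to SFT tokens,
# with the advantage fixed to a constant +1 (an assumption for illustration).
import torch


def psft_loss(logp_new: torch.Tensor,   # log pi_theta(y_t | x, y_<t), shape [B, T]
              logp_old: torch.Tensor,   # log pi_old(y_t | x, y_<t), shape [B, T]
              mask: torch.Tensor,       # 1.0 on response tokens, 0.0 on prompt/padding
              eps: float = 0.2) -> torch.Tensor:
    """Clipped surrogate with constant advantage A = 1.

    With A = 1, min(r * A, clip(r, 1-eps, 1+eps) * A) reduces to
    min(r, clip(r, 1-eps, 1+eps)): tokens inside the trust region get the
    usual likelihood push, and the push is cut off once r exceeds 1 + eps.
    """
    ratio = torch.exp(logp_new - logp_old.detach())       # r_t(theta)
    clipped = torch.clamp(ratio, 1.0 - eps, 1.0 + eps)
    per_token = torch.minimum(ratio, clipped)             # constant A = 1 folded in
    # Maximize the surrogate -> minimize its negative, averaged over response tokens.
    return -(per_token * mask).sum() / mask.sum().clamp_min(1.0)
```

In this sketch, when the ratio is 1 (e.g., at the start of an update, where pi_theta equals pi_old) the gradient coincides with the ordinary SFT cross-entropy gradient, and once a token's ratio climbs above 1 + eps its gradient vanishes, which is the trust-region constraint on policy drift that the abstract describes.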
Similar Papers
Improved Supervised Fine-Tuning for Large Language Models to Mitigate Catastrophic Forgetting
Computation and Language
Keeps AI smart while teaching it new tricks.
On the Generalization of SFT: A Reinforcement Learning Perspective with Reward Rectification
Machine Learning (CS)
Makes AI learn better from examples.
Beyond Imitation: Recovering Dense Rewards from Demonstrations
Machine Learning (CS)
Teaches computers to learn rewards from examples.