RIFT: Repurposing Negative Samples via Reward-Informed Fine-Tuning
By: Zehua Liu, Shuqi Liu, Tao Zhong, and more
While Supervised Fine-Tuning (SFT) and Rejection Sampling Fine-Tuning (RFT) are standard for LLM alignment, they either rely on costly expert data or discard valuable negative samples, leading to data inefficiency. To address this, we propose Reward-Informed Fine-Tuning (RIFT), a simple yet effective framework that utilizes all self-generated samples. Unlike the hard thresholding of RFT, RIFT repurposes negative trajectories, reweighting the loss with scalar rewards so the model learns from both the positive and negative trajectories in its own outputs. To overcome the training collapse caused by naive reward integration, where directly multiplying the loss by raw rewards yields an unbounded objective, we introduce a stabilized loss formulation that ensures numerical robustness and optimization efficiency. Extensive experiments on mathematical benchmarks across various base models show that RIFT consistently outperforms RFT. Our results demonstrate that RIFT is a robust and data-efficient alternative for alignment with mixed-quality, self-generated data.
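The abstract describes the mechanism, reweighting a standard negative log-likelihood loss by scalar rewards while keeping the objective bounded, without giving the exact formula. The sketch below is one minimal reading of that recipe in PyTorch; the function name `reward_weighted_loss` and the AWR-style exponentiated, clipped advantage weight are illustrative assumptions, not RIFT's actual stabilized loss.

```python
# A minimal sketch of fine-tuning on mixed-quality, self-generated trajectories
# with reward-based loss reweighting. The abstract does not give RIFT's exact
# stabilized loss; the AWR-style exponentiated-and-clipped advantage weight
# below is a stand-in assumption, not the paper's formulation.
import torch
import torch.nn.functional as F


def reward_weighted_loss(logits, target_ids, rewards, beta=1.0, w_max=5.0, pad_id=-100):
    """Sequence-level NLL reweighted by a bounded function of scalar rewards.

    logits:     (batch, seq_len, vocab) model outputs for the trajectories
    target_ids: (batch, seq_len) token ids, with pad_id marking masked positions
    rewards:    (batch,) scalar reward for each trajectory
    """
    # Per-token cross entropy, averaged over non-padding tokens per trajectory.
    nll = F.cross_entropy(
        logits.transpose(1, 2), target_ids, ignore_index=pad_id, reduction="none"
    )  # (batch, seq_len)
    mask = (target_ids != pad_id).float()
    seq_nll = (nll * mask).sum(dim=1) / mask.sum(dim=1).clamp(min=1.0)  # (batch,)

    # Naive integration would multiply seq_nll by raw rewards, which is
    # unbounded: a negative reward lets the loss be driven toward -inf.
    # Instead, center the rewards and map them to a bounded positive weight,
    # so low-reward trajectories are down-weighted rather than blown up.
    advantage = rewards - rewards.mean()
    weights = torch.exp(advantage / beta).clamp(max=w_max)

    return (weights * seq_nll).mean()


if __name__ == "__main__":
    batch, seq_len, vocab = 4, 8, 32
    logits = torch.randn(batch, seq_len, vocab, requires_grad=True)
    targets = torch.randint(0, vocab, (batch, seq_len))
    rewards = torch.tensor([1.0, -0.5, 0.3, -1.2])  # mixed-quality samples
    loss = reward_weighted_loss(logits, targets, rewards)
    loss.backward()
    print(f"loss = {loss.item():.4f}")
```

Any bounded mapping from rewards to weights would serve the same illustrative purpose here; the point the abstract makes is that multiplying by raw, possibly negative rewards leaves the loss unbounded, which is what the stabilized formulation is designed to avoid.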