Behavior Injection: Preparing Language Models for Reinforcement Learning
By: Zhepeng Cen, Yihang Yao, William Han, and more
Potential Business Impact:
Makes AI smarter at solving problems.
Reinforcement fine-tuning (RFT) has emerged as a powerful post-training technique to incentivize the reasoning ability of large language models (LLMs). However, LLMs can respond very inconsistently to RFT: some show substantial performance gains, while others plateau or even degrade. To understand this divergence, we analyze the per-step influence of the RL objective and identify two key conditions for effective post-training: (1) RL-informative rollout accuracy, and (2) strong data co-influence, which quantifies how much the training data affects performance on other samples. Guided by these insights, we propose behavior injection, a task-agnostic data-augmentation scheme applied prior to RL. Behavior injection enriches the supervised fine-tuning (SFT) data by seeding exploratory and exploitative behaviors, effectively making the model more RL-ready. We evaluate our method across two reasoning benchmarks with multiple base models. The results demonstrate that our theoretically motivated augmentation can significantly increase the performance gain from RFT over the pre-RL model.
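To make the idea of seeding exploratory and exploitative behaviors into SFT data more concrete, here is a minimal, hypothetical sketch of such a data-augmentation step. The dataclass layout, template strings, and function names (`inject_exploration`, `inject_exploitation`, `build_rl_ready_sft_data`) are illustrative assumptions, not the authors' released implementation; the sketch only shows the general pattern of rewriting SFT targets so they already contain branching attempts (exploration) and self-verification (exploitation) before RL begins.

```python
# Hypothetical sketch of pre-RL "behavior injection" as SFT data augmentation.
# All names and templates here are assumptions for illustration only.

import random
from dataclasses import dataclass, field
from typing import List


@dataclass
class SFTExample:
    prompt: str
    solution: str                          # reference reasoning trace / answer
    alt_attempts: List[str] = field(default_factory=list)  # optional alternatives


def inject_exploration(example: SFTExample) -> str:
    """Seed exploratory behavior: show a branch into an alternative attempt
    before committing to the reference solution."""
    if not example.alt_attempts:
        return example.solution
    alt = random.choice(example.alt_attempts)
    return (
        f"Attempt 1: {alt}\n"
        "This path looks unpromising, so I will try a different approach.\n"
        f"Attempt 2: {example.solution}"
    )


def inject_exploitation(solution: str) -> str:
    """Seed exploitative behavior: append a self-verification step that
    re-checks and commits to the current best solution."""
    return (
        f"{solution}\n"
        "Verifying the result step by step before finalizing.\n"
        "The reasoning checks out, so I commit to this answer."
    )


def build_rl_ready_sft_data(examples: List[SFTExample],
                            p_explore: float = 0.5,
                            p_exploit: float = 0.5) -> List[dict]:
    """Produce augmented (prompt, target) pairs for supervised fine-tuning,
    so the post-SFT model already exhibits RL-friendly behaviors."""
    augmented = []
    for ex in examples:
        target = ex.solution
        if random.random() < p_explore:
            target = inject_exploration(ex)
        if random.random() < p_exploit:
            target = inject_exploitation(target)
        augmented.append({"prompt": ex.prompt, "target": target})
    return augmented


if __name__ == "__main__":
    data = [SFTExample(
        prompt="What is 17 * 24?",
        solution="17 * 24 = 17 * 20 + 17 * 4 = 340 + 68 = 408.",
        alt_attempts=["17 * 24 is roughly 400, but I need the exact value."],
    )]
    for row in build_rl_ready_sft_data(data):
        print(row["prompt"], "->", row["target"], sep="\n")
```

Under these assumptions, the augmented targets are used for ordinary SFT; the intent is that rollouts from the resulting model are accurate enough to be informative for RL while also exhibiting the branching and verification patterns that RFT can then reinforce.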
Similar Papers
Blending Supervised and Reinforcement Fine-Tuning with Prefix Sampling
Machine Learning (CS)
Teaches computers to learn better from examples and trying.
Learning What Reinforcement Learning Can't: Interleaved Online Fine-Tuning for Hardest Questions
Artificial Intelligence
Teaches computers new things to solve harder problems.
UFT: Unifying Supervised and Reinforcement Fine-Tuning
Machine Learning (CS)
Makes computers think better and learn faster.