Intra-Trajectory Consistency for Reward Modeling
By: Chaoyang Zhou, Shunyu Liu, Zengmao Wang, and more
Potential Business Impact:
Teaches AI to judge answers better.
Reward models are critical for improving large language models (LLMs), particularly in reinforcement learning from human feedback (RLHF) and inference-time verification. Current reward modeling typically relies on response-level scores to learn outcome rewards. However, because these response-level scores are coarse-grained supervision signals, the reward model struggles to identify the specific components within a response trajectory that truly correlate with the scores, leading to poor generalization on unseen responses. In this paper, we propose to leverage generation probabilities to establish reward consistency between processes in the response trajectory, which allows the response-level supervisory signal to propagate across processes and thereby provides additional fine-grained signals for reward learning. Building on an analysis under the Bayesian framework, we develop an intra-trajectory consistency regularization that enforces more consistent rewards between adjacent processes connected by higher next-token generation probability. We apply the proposed regularization to an advanced outcome reward model and improve its performance on RewardBench. In addition, we show that a reward model trained with the proposed regularization induces better DPO-aligned policies and achieves better best-of-N (BoN) inference-time verification results. Our code is available at https://github.com/chaoyang101/ICRM.
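To make the core idea concrete, below is a minimal PyTorch sketch of one plausible form of the intra-trajectory consistency regularizer described in the abstract: adjacent processes (prefixes of the response) are pushed toward similar rewards, with the penalty weighted by the generator's next-token probability. The tensor names, the squared-difference penalty, and the way the regularizer is combined with the outcome loss are assumptions for illustration, not the authors' exact formulation; see the linked repository for the actual implementation.

```python
# Illustrative sketch only; names and the exact loss form are assumptions.
import torch


def intra_trajectory_consistency_loss(
    process_rewards: torch.Tensor,   # (B, T) reward assigned to each prefix/process
    next_token_probs: torch.Tensor,  # (B, T) generation probability of the token that
                                     # extends process t-1 into process t
    mask: torch.Tensor,              # (B, T) 1 for valid (non-padding) positions
) -> torch.Tensor:
    """Penalize reward jumps between adjacent processes, weighted more strongly
    when the generator assigns high probability to the transition between them."""
    # Squared reward difference between adjacent processes along the trajectory.
    diff = (process_rewards[:, 1:] - process_rewards[:, :-1]) ** 2
    # Weight each adjacent pair by the next-token probability of the later process:
    # high-probability continuations should keep rewards consistent.
    weights = next_token_probs[:, 1:] * mask[:, 1:] * mask[:, :-1]
    return (weights * diff).sum() / weights.sum().clamp_min(1e-8)


# Hypothetical total objective: the usual response-level (outcome) reward loss,
# e.g. a Bradley-Terry preference loss, plus the consistency regularizer.
# loss = outcome_loss + lambda_reg * intra_trajectory_consistency_loss(r, p, m)
```

In this reading, the response-level supervision still enters only through the outcome loss, while the probability-weighted consistency term lets that signal propagate to individual processes inside the trajectory.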
Similar Papers
ROCM: RLHF on consistency models
Machine Learning (CS)
Makes AI create better things faster.
Accelerating Diffusion Models in Offline RL via Reward-Aware Consistency Trajectory Distillation
Machine Learning (CS)
Makes AI learn faster and better for games.
Sentence-level Reward Model can Generalize Better for Aligning LLM from Human Preference
Computation and Language
Makes AI understand what people like better.