Sentence-level Reward Model can Generalize Better for Aligning LLM from Human Preference
By: Wenjie Qiu, Yi-Chen Li, Xuqin Zhang, and more
Potential Business Impact:
Helps AI better understand what people prefer.
Learning reward models from human preference datasets and then optimizing language models via reinforcement learning has emerged as a fundamental paradigm for aligning LLMs with human preferences. The performance of the reward model plays a crucial role in the effectiveness of alignment. Previous reward models operate at a coarse-grained level, requiring a complete response to be generated before a reward value can be obtained. The resulting sparse reward may present challenges for downstream reinforcement learning. While recent efforts have attempted to learn token-level reward models, the lack of explicit semantic information makes it difficult to model the credit of every individual token. In this paper, we propose assigning scores to every sentence, introducing an intermediate-grained reward model. By segmenting the complete response into sentences and applying a differential operation to the reward outputs at the start and end positions of each sentence, we can effectively model the rewards of sentences. Moreover, a novel attention mechanism is introduced to aggregate the scores of all sentences into a response-level score, which allows the model to be trained with the Bradley-Terry objective. On common benchmarks, our method outperforms the response-level reward model by 2.7% on RewardBench (for reward modeling evaluation) and surpasses all baselines on AlpacaEval (for alignment evaluation).
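The mechanism described in the abstract can be summarized in a short sketch: per-token outputs from a reward head are differenced at each sentence's start and end positions to produce sentence-level rewards, an attention layer aggregates these into a response-level score, and the model is trained with the Bradley-Terry pairwise loss. The code below is an illustrative sketch under stated assumptions, not the authors' implementation; the module names, the use of end-position hidden states as attention keys, and the backbone interface are all assumptions.

```python
# Illustrative sketch (not the paper's code) of a sentence-level reward model:
# sentence rewards via differencing at boundaries, attention aggregation,
# and a Bradley-Terry pairwise loss.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SentenceRewardModel(nn.Module):
    def __init__(self, backbone: nn.Module, hidden_size: int):
        super().__init__()
        self.backbone = backbone                                  # any LM returning per-token hidden states (assumed interface)
        self.reward_head = nn.Linear(hidden_size, 1)              # cumulative reward output at each token position
        self.attn_query = nn.Parameter(torch.randn(hidden_size))  # learned query for sentence aggregation

    def forward(self, input_ids, attention_mask, sent_starts, sent_ends):
        # hidden: (batch, seq_len, hidden_size)
        hidden = self.backbone(input_ids, attention_mask=attention_mask).last_hidden_state
        token_rewards = self.reward_head(hidden).squeeze(-1)      # (batch, seq_len)

        # Differential operation: a sentence's reward is the reward-head output
        # at its end position minus the output at its start position.
        batch_idx = torch.arange(input_ids.size(0)).unsqueeze(-1)
        sent_rewards = token_rewards[batch_idx, sent_ends] - token_rewards[batch_idx, sent_starts]

        # Attention over sentences, using end-position hidden states as keys (an assumption).
        sent_repr = hidden[batch_idx, sent_ends]                  # (batch, n_sent, hidden_size)
        attn = F.softmax(sent_repr @ self.attn_query, dim=-1)     # (batch, n_sent)

        # Response-level score = attention-weighted sum of sentence rewards.
        return (attn * sent_rewards).sum(dim=-1)                  # (batch,)


def bradley_terry_loss(score_chosen, score_rejected):
    # Standard pairwise preference loss: -log sigmoid(r_chosen - r_rejected).
    return -F.logsigmoid(score_chosen - score_rejected).mean()
```

In a training loop, the chosen and rejected responses from a preference pair would both be passed through the same model, with sentence boundary indices obtained from a simple segmenter over the decoded text, and the two response-level scores fed to the pairwise loss.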
Similar Papers
GRAM: A Generative Foundation Reward Model for Reward Generalization
Computation and Language
Teaches AI to learn better from more data.
Segmenting Text and Learning Their Rewards for Improved RLHF in Language Model
Computation and Language
Teaches AI to write better by judging whole sentences.
Beyond Monolithic Rewards: A Hybrid and Multi-Aspect Reward Optimization for MLLM Alignment
Artificial Intelligence
Teaches AI to follow instructions better.