RewardBench 2: Advancing Reward Model Evaluation
By: Saumya Malik, Valentina Pyatkin, Sander Land, and more
Potential Business Impact:
Tests the reward models used to train AI, making AI better at following instructions.
Reward models are used throughout the post-training of language models to capture nuanced signals from preference data and to provide a training target for optimization across domains such as instruction following, reasoning, and safety. The community has begun establishing best practices for evaluating reward models, from developing benchmarks that test capabilities in specific skill areas to benchmarks that test agreement with human preferences. At the same time, progress in evaluation has not been matched by the effectiveness of reward models in downstream tasks -- simpler direct alignment algorithms are reported to work better in many cases. This paper introduces RewardBench 2, a new multi-skill reward modeling benchmark designed to bring new, challenging data to accuracy-based reward model evaluation -- models score about 20 points lower on average on RewardBench 2 than on the first RewardBench -- while remaining highly correlated with downstream performance. Unlike most other benchmarks, RewardBench 2 sources new human prompts rather than reusing prompts from downstream evaluations, enabling more rigorous evaluation practices. In this paper, we describe our benchmark construction process, report how existing models perform on it, and quantify how benchmark performance correlates with downstream use of the models in both inference-time scaling algorithms, such as best-of-N sampling, and RLHF training algorithms such as proximal policy optimization.
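As a concrete illustration, the minimal Python sketch below shows the two uses of a reward model the abstract refers to: accuracy-based evaluation (does the model score the chosen completion above every rejected one?) and best-of-N sampling at inference time. The `reward_model` callable and the example record format are hypothetical placeholders for illustration, not the paper's actual code or data schema; it assumes an accuracy format that pairs one chosen completion with several rejected ones.

```python
from typing import Callable, Sequence

# Hypothetical reward model interface: (prompt, completion) -> scalar score.
RewardFn = Callable[[str, str], float]


def accuracy(reward_model: RewardFn, examples: Sequence[dict]) -> float:
    """Accuracy-based evaluation: fraction of examples where the chosen
    completion outscores every rejected completion.

    Each example is assumed to look like:
        {"prompt": str, "chosen": str, "rejected": [str, ...]}
    """
    hits = 0
    for ex in examples:
        chosen_score = reward_model(ex["prompt"], ex["chosen"])
        rejected_scores = [reward_model(ex["prompt"], r) for r in ex["rejected"]]
        # A hit only if the preferred answer ranks strictly first.
        hits += chosen_score > max(rejected_scores)
    return hits / len(examples)


def best_of_n(reward_model: RewardFn, prompt: str, candidates: Sequence[str]) -> str:
    """Inference-time scaling: given N candidate completions sampled from a
    policy, return the one the reward model scores highest."""
    return max(candidates, key=lambda c: reward_model(prompt, c))


if __name__ == "__main__":
    # Toy stand-in reward (longer is better) purely to make the sketch runnable.
    toy_rm: RewardFn = lambda p, c: float(len(c))
    print(best_of_n(toy_rm, "Explain RLHF.", ["ok", "a longer answer", "hi"]))
```

The paper's correlation claim can be read against this pair of functions: a benchmark built on the first (accuracy over chosen-vs-rejected sets) is useful insofar as it predicts gains from the second (best-of-N selection) and from RLHF training.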
Similar Papers
Reward Models are Metrics in a Trench Coat
Computation and Language
Makes AI better at judging its own answers.
A Systematic Analysis of Base Model Choice for Reward Modeling
Computation and Language
Improves AI training by picking the best starting model.
Sentence-level Reward Model can Generalize Better for Aligning LLM from Human Preference
Computation and Language
Helps AI better understand what people prefer.