Reward Models are Metrics in a Trench Coat
By: Sebastian Gehrmann
Potential Business Impact:
Makes AI better at judging its own answers.
The emergence of reinforcement learning in the post-training of large language models has sparked significant interest in reward models. Reward models assess the quality of sampled model outputs to generate training signals. This task is also performed by evaluation metrics that monitor the performance of an AI model. We find that the two research areas are mostly separate, leading to redundant terminology and repeated pitfalls. Common challenges in both areas include susceptibility to spurious correlations, downstream reward hacking, improving data quality, and conducting reliable meta-evaluation. Our position paper argues that closer collaboration between the fields can help overcome these issues. To that end, we show how metrics outperform reward models on specific tasks and provide an extensive survey of the two areas. Grounded in this survey, we point to multiple research topics in which closer alignment can improve both reward models and metrics, such as preference elicitation methods, avoidance of spurious correlations and reward hacking, and calibration-aware meta-evaluation.
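To make the overlap concrete, here is a minimal sketch (not from the paper; the function names and the toy scoring rule are illustrative) of the shared interface: both an evaluation metric and a reward model map a model output to a scalar quality score. A simple lexical-overlap scorer stands in for both, first averaged over a test set as a metric, then used to rank sampled candidates as a reward signal; in practice a reward model would be a learned network rather than a hand-written function.

```python
# Minimal sketch (illustrative only): the same scoring function can serve as an
# evaluation metric (scoring outputs against references on a test set) and as a
# reward model (scoring sampled candidates to produce a training signal).

def token_f1(candidate: str, reference: str) -> float:
    """Toy lexical-overlap score in [0, 1]; a stand-in for a metric or reward model."""
    cand, ref = candidate.lower().split(), reference.lower().split()
    if not cand or not ref:
        return 0.0
    overlap = len(set(cand) & set(ref))
    precision, recall = overlap / len(cand), overlap / len(ref)
    return 0.0 if precision + recall == 0 else 2 * precision * recall / (precision + recall)

# Role 1 -- evaluation metric: average score of model outputs over a small test set.
test_set = [("What is 2+2?", "2+2 equals 4"),
            ("Capital of France?", "The capital of France is Paris")]
model_outputs = ["The answer is 4", "Paris is the capital of France"]
metric_score = sum(token_f1(out, ref) for out, (_, ref) in zip(model_outputs, test_set)) / len(test_set)
print(f"evaluation metric: {metric_score:.3f}")

# Role 2 -- reward model: score sampled candidates and keep the highest-scoring one
# (e.g., best-of-n selection, or a scalar reward during RL post-training).
candidates = ["four", "2+2 equals 4", "I am not sure"]
rewards = [token_f1(c, "2+2 equals 4") for c in candidates]
best = candidates[max(range(len(candidates)), key=rewards.__getitem__)]
print(f"rewards: {rewards} -> selected: {best!r}")
```

The sketch mirrors the paper's framing: because the same scoring interface underlies both roles, pitfalls such as spurious correlations and reward hacking recur in both fields.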
Similar Papers
RewardBench 2: Advancing Reward Model Evaluation
Computation and Language
Tests AI to make it better at following instructions.
Reward Models in Deep Reinforcement Learning: A Survey
Machine Learning (CS)
Teaches computers to learn tasks by rewarding good actions.
Debiasing Reward Models by Representation Learning with Guarantees
Machine Learning (CS)
Makes AI understand what you really mean.