CHARM: Calibrating Reward Models With Chatbot Arena Scores
By: Xiao Zhu, Chenmien Tan, Pinzhen Chen and more
Potential Business Impact:
Makes AI training fairer by fixing biased reward scoring.
Reward models (RMs) play a crucial role in Reinforcement Learning from Human Feedback by serving as proxies for human preferences in aligning large language models. In this paper, we identify a model preference bias in RMs, where they systematically assign disproportionately high scores to responses from certain policy models. This bias distorts ranking evaluations and leads to unfair judgments. To address this issue, we propose a calibration method named CHatbot Arena calibrated Reward Modeling (CHARM) that leverages Elo scores from the Chatbot Arena leaderboard to mitigate RM overvaluation. We also introduce a Mismatch Degree metric to measure this preference bias. Our approach is computationally efficient, requiring only a small preference dataset for continued training of the RM. We conduct extensive experiments on reward model benchmarks and human preference alignment. Results demonstrate that our calibrated RMs (1) achieve improved evaluation accuracy on RM-Bench and the Chat-Hard domain of RewardBench, and (2) exhibit a stronger correlation with human preferences by producing scores more closely aligned with Elo rankings. By mitigating model preference bias, our method provides a generalizable and efficient solution for building fairer and more reliable reward models.
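The abstract does not spell out the paper's formal definitions, so the sketch below is only an illustrative interpretation of the two ideas it names: it treats the Mismatch Degree as the fraction of policy-model pairs whose ordering by average RM score disagrees with their ordering by Chatbot Arena Elo, and it shows a toy calibration loss that pushes the RM's pairwise preference probability toward the Elo-implied one. The function names, the Elo-to-probability mapping, and the loss form are assumptions for illustration, not the paper's actual method.

```python
# Illustrative sketch only: the Mismatch Degree and calibration loss below are
# assumptions inferred from the abstract, not CHARM's actual definitions.
from itertools import combinations

import numpy as np


def mismatch_degree(avg_rm_scores: dict[str, float], elo_scores: dict[str, float]) -> float:
    """Fraction of policy-model pairs whose ordering by average RM score
    disagrees with their ordering by Chatbot Arena Elo (assumed definition)."""
    models = sorted(avg_rm_scores)
    pairs = list(combinations(models, 2))
    flipped = 0
    for a, b in pairs:
        rm_order = np.sign(avg_rm_scores[a] - avg_rm_scores[b])
        elo_order = np.sign(elo_scores[a] - elo_scores[b])
        if rm_order != elo_order:
            flipped += 1
    return flipped / len(pairs)


def elo_win_prob(elo_a: float, elo_b: float) -> float:
    """Standard Elo expected-win probability of model A over model B."""
    return 1.0 / (1.0 + 10 ** ((elo_b - elo_a) / 400.0))


def calibration_loss(rm_score_a: float, rm_score_b: float, elo_a: float, elo_b: float) -> float:
    """Toy Bradley-Terry-style target: cross-entropy between the Elo-implied
    preference probability and the RM-implied one (sigmoid of score difference)."""
    p_rm = 1.0 / (1.0 + np.exp(-(rm_score_a - rm_score_b)))
    p_elo = elo_win_prob(elo_a, elo_b)
    return -(p_elo * np.log(p_rm + 1e-12) + (1.0 - p_elo) * np.log(1.0 - p_rm + 1e-12))


if __name__ == "__main__":
    # Hypothetical average RM scores and Elo ratings for three policy models.
    avg_rm = {"model_a": 0.82, "model_b": 0.91, "model_c": 0.55}
    elo = {"model_a": 1290.0, "model_b": 1255.0, "model_c": 1180.0}
    print("Mismatch Degree:", mismatch_degree(avg_rm, elo))
    print("Calibration loss (a vs b):",
          calibration_loss(avg_rm["model_a"], avg_rm["model_b"], elo["model_a"], elo["model_b"]))
```

In the paper itself, calibration is performed by continued training of the reward model on a small preference dataset; this sketch only illustrates the kind of Elo-derived targets and mismatch measurement such training could use.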
Similar Papers
Improving Your Model Ranking on Chatbot Arena by Vote Rigging
Computation and Language
Shows how chatbot rankings can be cheated by rigging votes.
RoleRMBench & RoleRM: Towards Reward Modeling for Profile-Based Role Play in Dialogue Systems
Computation and Language
Makes AI better at pretending to be characters.
The Reward Model Selection Crisis in Personalized Alignment
Artificial Intelligence
Helps AI learn what you really want.