CHARM: Calibrating Reward Models With Chatbot Arena Scores

Published: April 14, 2025 | arXiv ID: 2504.10045v1

By: Xiao Zhu, Chenmien Tan, Pinzhen Chen, and more

Potential Business Impact:

Makes AI evaluation fairer by correcting biased reward-model scoring.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Reward models (RMs) play a crucial role in Reinforcement Learning from Human Feedback by serving as proxies for human preferences in aligning large language models. In this paper, we identify a model preference bias in RMs, where they systematically assign disproportionately high scores to responses from certain policy models. This bias distorts ranking evaluations and leads to unfair judgments. To address this issue, we propose a calibration method named CHatbot Arena calibrated Reward Modeling (CHARM) that leverages Elo scores from the Chatbot Arena leaderboard to mitigate RM overvaluation. We also introduce a Mismatch Degree metric to measure this preference bias. Our approach is computationally efficient, requiring only a small preference dataset for continued training of the RM. We conduct extensive experiments on reward model benchmarks and human preference alignment. Results demonstrate that our calibrated RMs (1) achieve improved evaluation accuracy on RM-Bench and the Chat-Hard domain of RewardBench, and (2) exhibit a stronger correlation with human preferences by producing scores more closely aligned with Elo rankings. By mitigating model preference bias, our method provides a generalizable and efficient solution for building fairer and more reliable reward models.
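The abstract does not specify implementation details, but the core idea of calibrating a reward model against Chatbot Arena Elo scores can be sketched as follows. This is a minimal, hypothetical Python illustration, assuming a Bradley-Terry-style reward model and using the standard Elo expected-score formula to derive a soft preference target; the function names and training setup here are illustrative assumptions, not the paper's actual method or code.

```python
# Hypothetical sketch of Elo-calibrated reward modeling (not the paper's code).
# Assumption: the RM scores responses from two policy models A and B, and the
# Chatbot Arena Elo scores of A and B provide a soft target for how often
# responses from A should be preferred over responses from B.

import torch
import torch.nn.functional as F

def elo_to_win_prob(elo_a: float, elo_b: float) -> float:
    """Standard Elo expected-score formula: P(A beats B)."""
    return 1.0 / (1.0 + 10 ** ((elo_b - elo_a) / 400.0))

def calibration_loss(rm_scores_a: torch.Tensor,
                     rm_scores_b: torch.Tensor,
                     elo_a: float,
                     elo_b: float) -> torch.Tensor:
    """Cross-entropy between the RM-implied preference probability
    (sigmoid of the score difference, as in Bradley-Terry) and the
    Elo-implied win probability from the leaderboard."""
    p_rm = torch.sigmoid(rm_scores_a - rm_scores_b)           # RM-implied P(A > B)
    p_elo = torch.full_like(p_rm, elo_to_win_prob(elo_a, elo_b))
    return F.binary_cross_entropy(p_rm, p_elo)

# Example: the RM systematically overvalues model B relative to the leaderboard.
scores_a = torch.tensor([1.2, 0.8, 1.5])   # RM scores for responses from model A
scores_b = torch.tensor([2.0, 1.9, 2.4])   # RM scores for responses from model B
loss = calibration_loss(scores_a, scores_b, elo_a=1250.0, elo_b=1200.0)
print(float(loss))  # continued training of the RM would minimize this mismatch
```

Under this reading, continued training on a small preference dataset with such a loss would pull the RM's pairwise win probabilities toward the Elo-implied ones, which is consistent with the abstract's claim of improved correlation with Elo rankings; the paper's actual loss and the Mismatch Degree metric may differ.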

Country of Origin
🇨🇭 Switzerland

Repos / Data Links

Page Count
15 pages

Category
Computer Science:
Artificial Intelligence