When Distance Distracts: Representation Distance Bias in BT-Loss for Reward Models
By: Tong Xie, Andrew Bai, Yuanhao Ban, and more
Potential Business Impact:
Helps AI learn to better judge good vs. bad answers.
Reward models are central to Large Language Model (LLM) alignment within the framework of RLHF. The standard objective used in reward modeling is the Bradley-Terry (BT) loss, which learns from pairwise data consisting of a chosen and a rejected response. In this work, we analyze the per-sample gradient of the BT loss and show that its norm scales with two distinct components: (1) the difference in predicted rewards between the chosen and rejected responses, which reflects the prediction error, and, critically, (2) the representation distance between the pair, measured in the output space of the final layer. While the first term captures the intended training signal, we show that the second term can significantly affect the update magnitude and misalign learning. Specifically, pairs with small representation distance often receive vanishingly weak updates, even when misranked, while pairs with large distance receive disproportionately strong updates. As a result, gradients from large-distance pairs overshadow those from small-distance pairs, for which fine-grained distinctions are especially important. To overcome this limitation, we propose NormBT, an adaptive pair-wise normalization scheme that balances out representation-driven effects and focuses the learning signal on prediction error. NormBT is a lightweight, drop-in modification of the BT loss with negligible overhead. Across various LLM backbones and datasets, NormBT improves reward model performance consistently, with notable gains of over 5% on the Reasoning category of RewardBench, which contains many small-distance pairs. This work reveals a key limitation in the widely used BT objective and provides a simple, effective correction.
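To make the gradient decomposition concrete, here is a minimal sketch, not code from the paper, assuming a linear reward head r(h) = w · h over final-layer representations h. It checks that the per-pair BT-loss gradient norm factors into a prediction-error term times the pair's representation distance; the function normbt_style_grad is a hypothetical stand-in for NormBT's pair-wise normalization, whose exact form is not given in this abstract.

```python
# Minimal sketch (assumptions: a linear reward head r(h) = w @ h on final-layer
# representations h; "normbt_style_grad" is a hypothetical stand-in for NormBT).
import numpy as np

def bt_loss_grad(w, h_chosen, h_rejected):
    """Per-sample BT loss L = -log sigmoid(r_c - r_r) and its gradient w.r.t. w."""
    margin = w @ h_chosen - w @ h_rejected            # predicted reward difference
    pred_error = 1.0 / (1.0 + np.exp(margin))         # = 1 - sigmoid(margin)
    loss = -np.log(1.0 - pred_error)
    grad = -pred_error * (h_chosen - h_rejected)      # dL/dw
    return loss, grad, pred_error

def grad_norm_decomposition(w, h_chosen, h_rejected):
    """||grad|| factors into prediction error times representation distance."""
    _, grad, pred_error = bt_loss_grad(w, h_chosen, h_rejected)
    rep_distance = np.linalg.norm(h_chosen - h_rejected)
    assert np.isclose(np.linalg.norm(grad), pred_error * rep_distance)
    return pred_error, rep_distance

def normbt_style_grad(w, h_chosen, h_rejected, eps=1e-8):
    """Hypothetical normalization in the spirit of NormBT: divide the per-pair
    gradient by the pair's representation distance, so the update magnitude is
    driven by prediction error rather than by how far apart the pair sits."""
    loss, grad, _ = bt_loss_grad(w, h_chosen, h_rejected)
    rep_distance = np.linalg.norm(h_chosen - h_rejected) + eps
    return loss, grad / rep_distance

rng = np.random.default_rng(0)
w, h_c, h_r = rng.normal(size=16), rng.normal(size=16), rng.normal(size=16)
err, dist = grad_norm_decomposition(w, h_c, h_r)
print(f"prediction error: {err:.3f}  representation distance: {dist:.3f}")
```

In this toy setup, a misranked pair whose representations sit close together produces a near-zero BT update, which is exactly the failure mode described above; dividing the per-pair gradient by the representation distance removes that scaling so the update tracks prediction error alone.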
Similar Papers
On the Robustness of Reward Models for Language Model Alignment
Computation and Language
Makes AI better at picking good answers.
APLOT: Robust Reward Modeling via Adaptive Preference Learning with Optimal Transport
Machine Learning (CS)
Makes AI better understand what people prefer.
Debiasing Reward Models by Representation Learning with Guarantees
Machine Learning (CS)
Makes AI understand what you really mean.