Rectifying Shortcut Behaviors in Preference-based Reward Learning
By: Wenqian Ye, Guangtao Zheng, Aidong Zhang
Potential Business Impact:
Teaches AI to follow instructions, not cheat.
In reinforcement learning from human feedback, preference-based reward models play a central role in aligning large language models with human preferences. However, recent studies show that these models are prone to reward hacking and often fail to generalize well due to over-optimization. They achieve high reward scores by exploiting shortcuts, that is, spurious features (e.g., response verbosity, agreeable tone, or sycophancy) that correlate with human preference labels in the training data rather than genuinely reflecting the intended objectives. In this paper, instead of probing these issues one at a time, we take a broader view of the reward hacking problem as shortcut behaviors and introduce a principled yet flexible approach to mitigate them in preference-based reward learning. Drawing on invariant theory viewed from a kernel perspective, we propose Preference-based Reward Invariance for Shortcut Mitigation (PRISM), which learns group-invariant kernels with feature maps through a closed-form learning objective. Experimental results on several benchmarks show that our method consistently improves the accuracy of the reward model on diverse out-of-distribution tasks and reduces the dependence on shortcuts in downstream policy models, establishing a robust framework for preference-based alignment.
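The abstract only names PRISM's ingredients (a preference-based reward model and group-invariant kernels with explicit feature maps), so the snippet below is a minimal illustrative sketch rather than the paper's method: it averages a random-Fourier feature map over a small transformation group acting on a coordinate that stands in for a spurious cue, and scores preference pairs with a standard Bradley-Terry loss. Every name in it (base_features, group_averaged_features, the toy sign-flip group) is hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def base_features(x, omega, b):
    """Random Fourier features approximating an RBF kernel's feature map."""
    return np.sqrt(2.0 / omega.shape[1]) * np.cos(x @ omega + b)

def group_averaged_features(x, group, omega, b):
    """Average the feature map over a finite transformation group.

    The induced kernel K(x, y) = <phi_G(x), phi_G(y)> is invariant when any
    element of `group` is applied to either argument.
    """
    return np.mean([base_features(g(x), omega, b) for g in group], axis=0)

def bradley_terry_loss(w, feats_chosen, feats_rejected):
    """Negative log-likelihood that the chosen response outranks the rejected one."""
    margin = (feats_chosen - feats_rejected) @ w
    return -np.mean(np.log(1.0 / (1.0 + np.exp(-margin))))

# Toy setup: 8-dim response embeddings whose first coordinate stands in for a
# spurious cue (e.g., verbosity). Averaging over the sign-flip group {id, s}
# makes the reward's kernel exactly invariant to that coordinate's sign.
d, D, n = 8, 64, 256
omega = rng.normal(size=(d, D))
b = rng.uniform(0.0, 2.0 * np.pi, size=D)
flip = np.ones(d)
flip[0] = -1.0
group = [lambda x: x, lambda x: x * flip]

x_chosen = rng.normal(size=(n, d))
x_rejected = rng.normal(size=(n, d))
phi_c = group_averaged_features(x_chosen, group, omega, b)
phi_r = group_averaged_features(x_rejected, group, omega, b)

w = 0.01 * rng.normal(size=D)
print("Bradley-Terry loss with group-averaged features:",
      bradley_terry_loss(w, phi_c, phi_r))
```

Averaging the feature map over a group is one standard way to obtain a kernel that ignores the chosen transformations by construction; how PRISM selects the group and formulates its closed-form objective is detailed in the paper itself.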
Similar Papers
Preference Learning for AI Alignment: a Causal Perspective
Artificial Intelligence
Makes AI understand what people truly want.
Debiasing Reward Models by Representation Learning with Guarantees
Machine Learning (CS)
Makes AI understand what you really mean.
Repairing Reward Functions with Human Feedback to Mitigate Reward Hacking
Artificial Intelligence
Fixes computer goals to match what people want.