Mitigating Length Bias in RLHF through a Causal Lens
By: Hyeonji Kim, Sujeong Oh, Sanghack Lee
Potential Business Impact:
Makes AI write shorter, better answers.
Reinforcement learning from human feedback (RLHF) is widely used to align large language models (LLMs) with human preferences. However, RLHF-trained reward models often exhibit length bias -- a systematic tendency to favor longer responses by conflating verbosity with quality. We propose a causal framework for analyzing and mitigating length bias in RLHF reward modeling. Central to our approach is a counterfactual data augmentation method that generates response pairs designed to isolate content quality from verbosity: (1) length-divergent pairs with similar content and (2) content-divergent pairs of similar length. Training the reward model on these counterfactual examples enables it to score responses on content quality rather than verbosity. Empirical evaluations show that our method reduces length bias in reward assignment and leads to more concise, content-focused outputs from the policy model, improving the robustness and content sensitivity of reward modeling in RLHF pipelines.
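The abstract does not spell out the training objective, so the following is a minimal sketch of how counterfactually augmented pairs could feed a standard pairwise reward-model update. The batch field names (length_pair_concise, content_pair_good, etc.), the toy encoder, and the choice of a Bradley-Terry preference loss are assumptions for illustration, not the authors' implementation; a real reward model would wrap a pretrained LM backbone.

```python
# Sketch only: pair-construction details and loss are assumed, not taken from the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyRewardModel(nn.Module):
    """Stand-in reward model that scores a tokenized (prompt, response) sequence."""
    def __init__(self, vocab_size=5000, dim=64):
        super().__init__()
        self.embed = nn.EmbeddingBag(vocab_size, dim)  # mean-pooled bag of tokens
        self.head = nn.Linear(dim, 1)

    def forward(self, token_ids):                      # token_ids: (batch, seq_len)
        return self.head(self.embed(token_ids)).squeeze(-1)  # (batch,) scalar rewards


def pairwise_loss(r_preferred, r_rejected):
    """Bradley-Terry style preference loss: push preferred reward above rejected."""
    return -F.logsigmoid(r_preferred - r_rejected).mean()


def train_step(model, optimizer, batch):
    """One update on counterfactually augmented pairs (hypothetical field names).

    * length-divergent pairs: same content, one concise and one verbose rewrite;
      here the concise response is labeled preferred (one plausible labeling),
      so the reward cannot simply track length.
    * content-divergent pairs: similar length, one higher- and one lower-quality
      response; the higher-quality one is preferred, isolating content quality.
    """
    r_concise = model(batch["length_pair_concise"])
    r_verbose = model(batch["length_pair_verbose"])
    r_good = model(batch["content_pair_good"])
    r_bad = model(batch["content_pair_bad"])

    loss = pairwise_loss(r_concise, r_verbose) + pairwise_loss(r_good, r_bad)

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()


if __name__ == "__main__":
    torch.manual_seed(0)
    model = ToyRewardModel()
    opt = torch.optim.AdamW(model.parameters(), lr=1e-3)
    # Random token ids stand in for tokenized (prompt, response) pairs.
    batch = {k: torch.randint(0, 5000, (8, 32)) for k in
             ["length_pair_concise", "length_pair_verbose",
              "content_pair_good", "content_pair_bad"]}
    print("loss:", train_step(model, opt, batch))
```

Under these assumptions, the two pair types act as complementary constraints: length-divergent pairs discourage rewarding verbosity for its own sake, while content-divergent pairs of matched length preserve the model's sensitivity to genuine quality differences.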
Similar Papers
Counterfactual Reward Model Training for Bias Mitigation in Multimodal Reinforcement Learning
Machine Learning (CS)
Makes AI fairer by removing hidden biases.
Robust Reinforcement Learning from Human Feedback for Large Language Models Fine-Tuning
Machine Learning (Stat)
Makes AI understand what people want better.
Word Overuse and Alignment in Large Language Models: The Influence of Learning from Human Feedback
Computation and Language
Fixes AI's wordy and repetitive writing habits.