Score: 1

Mitigating Length Bias in RLHF through a Causal Lens

Published: November 16, 2025 | arXiv ID: 2511.12573v1

By: Hyeonji Kim, Sujeong Oh, Sanghack Lee

Potential Business Impact:

Makes AI write shorter, better answers.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Reinforcement learning from human feedback (RLHF) is widely used to align large language models (LLMs) with human preferences. However, RLHF-trained reward models often exhibit length bias, a systematic tendency to favor longer responses by conflating verbosity with quality. We propose a causal framework for analyzing and mitigating length bias in RLHF reward modeling. Central to our approach is a counterfactual data augmentation method that generates response pairs designed to isolate content quality from verbosity: (1) length-divergent pairs with similar content and (2) content-divergent pairs of similar length. Training the reward model on these counterfactual examples enables it to assess responses based on content quality independently of verbosity. Empirical evaluations show that our method reduces length bias in reward assignment and leads to more concise, content-focused outputs from the policy model, improving the robustness and content sensitivity of reward modeling in RLHF pipelines.
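
To make the pair-construction-and-training idea concrete, below is a minimal sketch of how a reward model might be trained on the two counterfactual pair types described in the abstract. The toy architecture, the random placeholder token IDs, and the choice of a reward-invariance (MSE) term for the same-content pairs are illustrative assumptions for this sketch, not the authors' released implementation or their exact labeling scheme.

```python
# Minimal sketch (not the authors' code) of reward-model training on the two
# counterfactual pair types. Architecture, placeholder token IDs, and the
# length-invariance term are assumptions made for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyRewardModel(nn.Module):
    """Stand-in reward model: mean-pools token embeddings into a scalar reward."""
    def __init__(self, vocab_size: int = 1000, dim: int = 32):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.head = nn.Linear(dim, 1)

    def forward(self, ids: torch.Tensor) -> torch.Tensor:  # ids: (batch, seq_len)
        return self.head(self.embed(ids).mean(dim=1)).squeeze(-1)

def bradley_terry_loss(rm: nn.Module, preferred: torch.Tensor,
                       dispreferred: torch.Tensor) -> torch.Tensor:
    """Standard pairwise loss: the preferred response should score higher."""
    return -F.logsigmoid(rm(preferred) - rm(dispreferred)).mean()

rm = ToyRewardModel()

# (1) Length-divergent pair, similar content: concise vs. verbose phrasing of
#     the same answer. Pushing the two rewards together means length alone
#     cannot move the score (one plausible objective; the paper may differ).
concise = torch.randint(0, 1000, (4, 16))   # placeholder token IDs
verbose = torch.randint(0, 1000, (4, 32))
length_invariance = F.mse_loss(rm(concise), rm(verbose))

# (2) Content-divergent pair, similar length: high-quality vs. degraded answer.
#     The pairwise loss forces the model to discriminate on content.
good = torch.randint(0, 1000, (4, 24))
degraded = torch.randint(0, 1000, (4, 24))
content_loss = bradley_terry_loss(rm, good, degraded)

loss = content_loss + length_invariance
loss.backward()  # gradients flow into the reward model as in ordinary RM training
```

In a real RLHF pipeline, the placeholder token IDs would be tokenized counterfactual responses produced by an LLM-based rewriting step, and the resulting reward model would then score rollouts during policy optimization.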

Repos / Data Links

Page Count
28 pages

Category
Computer Science: Computation and Language