Targeting Misalignment: A Conflict-Aware Framework for Reward-Model-based LLM Alignment
By: Zixuan Liu, Siavash H. Khajavi, Guangkai Jiang, and more
Reward-model-based fine-tuning is a central paradigm for aligning Large Language Models with human preferences. However, such approaches rely critically on the assumption that the proxy reward model accurately reflects the intended supervision, an assumption often violated by annotation noise, bias, or limited coverage. This misalignment can lead to undesirable behaviors, where models optimize for flawed signals rather than true human values. In this paper, we investigate a novel framework for identifying and mitigating such misalignment by treating the fine-tuning process as a form of knowledge integration. We focus on detecting proxy-policy conflicts: cases where the base policy strongly disagrees with the proxy reward model. We argue that such conflicts often signify areas of shared ignorance, where neither the policy nor the reward model possesses sufficient knowledge, making them especially susceptible to misalignment. To this end, we propose two complementary metrics for identifying these conflicts: a localized Proxy-Policy Alignment Conflict Score (PACS) and a global Kendall-Tau Distance measure. Building on this insight, we design an algorithm, Selective Human-in-the-loop Feedback via Conflict-Aware Sampling (SHF-CAS), that targets high-conflict QA pairs for additional feedback, refining both the reward model and the policy efficiently. Experiments on two alignment tasks demonstrate that our approach improves overall alignment performance even when training with a biased proxy reward. Our work provides a new lens for interpreting alignment failures and offers a principled pathway for targeted refinement in LLM training.
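The abstract only names the two conflict signals and the sampling step; the sketch below gives one plausible Python instantiation for illustration. Everything in it is an assumption rather than the paper's actual method: PACS is rendered here as the rank displacement of the proxy reward model's preferred candidate under the policy's own log-probability ranking, the Kendall-Tau distance is the standard normalized count of discordant candidate pairs, and the helper names (`ranks`, `pacs`, `kendall_tau_distance`, `select_for_feedback`) are hypothetical.

```python
# Hedged sketch (not from the paper): one way to instantiate the two conflict
# signals and the conflict-aware selection step described in the abstract.
# Assumed setup: for each prompt, a proxy reward model scores several candidate
# responses and the policy assigns each candidate a (length-normalized) log-prob.

from itertools import combinations
from typing import List, Sequence


def ranks(scores: Sequence[float]) -> List[int]:
    """Rank of each candidate under descending score (0 = most preferred)."""
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    rank = [0] * len(scores)
    for position, idx in enumerate(order):
        rank[idx] = position
    return rank


def kendall_tau_distance(proxy_scores: Sequence[float],
                         policy_scores: Sequence[float]) -> float:
    """Global signal: fraction of candidate pairs the two scorers order differently."""
    ra, rb = ranks(proxy_scores), ranks(policy_scores)
    n = len(ra)
    discordant = sum(
        (ra[i] - ra[j]) * (rb[i] - rb[j]) < 0
        for i, j in combinations(range(n), 2)
    )
    return discordant / (n * (n - 1) / 2)


def pacs(proxy_scores: Sequence[float], policy_scores: Sequence[float]) -> float:
    """Localized signal in [0, 1]: how far down the policy's own ranking the
    proxy's top candidate falls (requires at least two candidates)."""
    best_for_proxy = max(range(len(proxy_scores)), key=lambda i: proxy_scores[i])
    return ranks(policy_scores)[best_for_proxy] / (len(proxy_scores) - 1)


def select_for_feedback(batch: List[dict], k: int) -> List[dict]:
    """Conflict-aware sampling: route the k highest-conflict prompts to annotators."""
    return sorted(batch, key=lambda ex: pacs(ex["proxy"], ex["policy"]), reverse=True)[:k]


if __name__ == "__main__":
    example = {"proxy": [2.1, 0.3, -1.0], "policy": [-5.2, -1.1, -0.7]}
    print(pacs(example["proxy"], example["policy"]))                  # 1.0: maximal conflict
    print(kendall_tau_distance(example["proxy"], example["policy"]))  # 1.0: rankings fully reversed
    print(select_for_feedback([example], k=1))
```

In this toy setup, a PACS of 1.0 means the proxy's favorite response is the one the policy ranks last; a conflict-aware sampler in the spirit of SHF-CAS would send such prompts to human annotators first, using the global Kendall-Tau distance as a coarser batch-level check.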
Similar Papers
LLM Misalignment via Adversarial RLHF Platforms
Machine Learning (CS)
Shows how adversarial RLHF platforms can steer models toward harmful behavior during training.
The Realignment Problem: When Right becomes Wrong in LLMs
Computation and Language
Studies how to realign LLMs to updated norms without degrading existing behavior.
Two Minds Better Than One: Collaborative Reward Modeling for LLM Alignment
Machine Learning (CS)
Collaboratively cleans reward-model training data to improve alignment results.