Multi-dimensional Preference Alignment by Conditioning Reward Itself

Published: December 11, 2025 | arXiv ID: 2512.10237v1

By: Jiho Jang, Jinyoung Kim, Kyungjune Baek, and more

Potential Business Impact:

Teaches image-generating AI to create better pictures by learning from several different kinds of feedback at once.

Business Areas:
Personalization, Commerce and Shopping

Reinforcement Learning from Human Feedback has emerged as a standard approach for aligning diffusion models. However, we identify a fundamental limitation in the standard DPO formulation: it relies on the Bradley-Terry model to aggregate diverse evaluation axes, such as aesthetic quality and semantic alignment, into a single scalar reward. This aggregation creates a reward conflict in which the model is forced to unlearn desirable features along one dimension whenever they appear in a globally non-preferred sample. To address this issue, we propose Multi-Reward Conditional DPO (MCDPO), which resolves reward conflicts by introducing a disentangled Bradley-Terry objective. MCDPO explicitly injects a preference outcome vector as a condition during training, allowing the model to learn the correct optimization direction for each reward axis independently within a single network. We further introduce dimensional reward dropout to ensure balanced optimization across dimensions. Extensive experiments on Stable Diffusion 1.5 and SDXL demonstrate that MCDPO achieves superior performance across benchmarks. Notably, our conditional framework enables dynamic, multi-axis control at inference time, using Classifier-Free Guidance to amplify specific reward dimensions without additional training or external reward models.
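
To make the idea concrete, below is a minimal PyTorch sketch of what a per-dimension (disentangled) Bradley-Terry DPO objective with dimensional reward dropout might look like. This is an illustrative approximation, not the paper's exact formulation: the function name `mcdpo_loss`, the ±1/0 encoding of the preference outcome vector, the dropout rate, and the use of random placeholder log-likelihoods are all assumptions. In MCDPO the policy log-likelihoods would additionally be computed with the preference outcome vector injected as a condition into the network, and the per-image terms would come from the diffusion model's likelihood approximation rather than raw scalars.

```python
import torch
import torch.nn.functional as F

def mcdpo_loss(
    logp_w, logp_l,          # policy log-likelihoods of winner / loser images,
                             # assumed to be computed with the preference vector as a condition
    ref_logp_w, ref_logp_l,  # frozen reference-model log-likelihoods
    pref_vec,                # (B, D) float: +1 if axis d prefers the winner,
                             # -1 if it prefers the loser, 0 if tied / unlabeled
    beta=0.1,                # DPO temperature (illustrative value)
    dropout_p=0.3,           # dimensional reward dropout rate (illustrative value)
):
    # Shared implicit-reward margin, as in standard (Diffusion-)DPO.
    margin = beta * ((logp_w - ref_logp_w) - (logp_l - ref_logp_l))      # (B,)

    # Dimensional reward dropout: randomly drop axes each step so no single
    # dimension dominates the gradient; tied / unlabeled axes are ignored.
    keep = (torch.rand_like(pref_vec) > dropout_p).float()
    keep = keep * (pref_vec != 0).float()

    # Disentangled Bradley-Terry: each axis gets its own BT term whose
    # optimization direction follows that axis's own preference outcome,
    # instead of one aggregated scalar label for the whole pair.
    per_axis = -F.logsigmoid(pref_vec * margin.unsqueeze(-1))            # (B, D)
    return (per_axis * keep).sum() / keep.sum().clamp(min=1.0)


# Toy usage: a batch of 4 pairs judged on 3 axes (e.g. aesthetics, semantic
# alignment, and a third criterion). Random tensors stand in for the
# diffusion model's likelihood terms purely to show the shapes involved.
B, D = 4, 3
logp_w, logp_l = torch.randn(B), torch.randn(B)
ref_w, ref_l = torch.randn(B), torch.randn(B)
pref = torch.tensor([[1.0, -1.0, 1.0]]).repeat(B, 1)   # mixed per-axis outcomes
print(mcdpo_loss(logp_w, logp_l, ref_w, ref_l, pref))
```

Because the preference outcome vector is also an input to the policy during training, at inference one could set individual entries toward the "preferred" value and apply Classifier-Free Guidance on that condition to amplify a chosen reward axis, which matches the training-free, per-dimension control described in the abstract.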

Country of Origin
🇰🇷 Korea, Republic of

Page Count
14 pages

Category
Computer Science:
Computer Vision and Pattern Recognition