Multi-dimensional Preference Alignment by Conditioning Reward Itself
By: Jiho Jang, Jinyoung Kim, Kyungjune Baek, and more
Potential Business Impact:
Teaches AI to create better pictures by understanding different feedback.
Reinforcement Learning from Human Feedback has emerged as a standard approach for aligning diffusion models. However, we identify a fundamental limitation of the standard DPO formulation: it relies on the Bradley-Terry model to aggregate diverse evaluation axes, such as aesthetic quality and semantic alignment, into a single scalar reward. This aggregation creates a reward conflict in which the model is forced to unlearn desirable features along one dimension whenever they appear in a globally non-preferred sample. To address this issue, we propose Multi Reward Conditional DPO (MCDPO), which resolves reward conflicts by introducing a disentangled Bradley-Terry objective. MCDPO explicitly injects a preference outcome vector as a condition during training, allowing the model to learn the correct optimization direction for each reward axis independently within a single network. We further introduce dimensional reward dropout to ensure balanced optimization across dimensions. Extensive experiments on Stable Diffusion 1.5 and SDXL demonstrate that MCDPO achieves superior benchmark performance. Notably, the conditional framework enables dynamic, multi-axis control at inference time, using Classifier-Free Guidance to amplify specific reward dimensions without additional training or external reward models.
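To make the disentangled objective more concrete, below is a minimal, hypothetical PyTorch sketch of a per-axis Bradley-Terry (DPO-style) loss that takes a preference-outcome vector and applies dimensional reward dropout. The function name `mcdpo_loss`, the tensor shapes, and the masking scheme are our own assumptions for illustration, not the authors' released implementation; the log-probabilities are assumed to come from a model already conditioned on the preference vector, as the abstract describes.

```python
# Hypothetical sketch: per-axis Bradley-Terry loss with a preference-outcome
# condition vector and dimensional reward dropout (assumed names and shapes).
import torch
import torch.nn.functional as F


def mcdpo_loss(logp_a, logp_b, logp_a_ref, logp_b_ref, pref_vec,
               beta=0.1, drop_p=0.3):
    """
    logp_a, logp_b:         (B,) model log-probs of the two samples in each pair,
                            assumed to be computed under the preference condition
    logp_a_ref, logp_b_ref: (B,) frozen reference-model log-probs
    pref_vec:               (B, D) per-axis outcomes in {+1, -1}
                            (+1: sample A preferred on that axis, -1: sample B)
    """
    # Implicit reward margin between the two samples (standard DPO quantity).
    margin = beta * ((logp_a - logp_a_ref) - (logp_b - logp_b_ref))   # (B,)

    # Disentangled objective: instead of one scalar label per pair, flip the
    # optimization direction separately for each reward axis according to the
    # conditioning preference vector.
    per_axis = F.logsigmoid(pref_vec * margin.unsqueeze(1))           # (B, D)

    # Dimensional reward dropout: randomly mask axes so that no single
    # dimension dominates the update.
    keep = (torch.rand_like(per_axis) > drop_p).float()
    return -(per_axis * keep).sum() / keep.sum().clamp(min=1.0)


if __name__ == "__main__":
    B, D = 4, 3  # 4 pairs, 3 reward axes (e.g., aesthetics, alignment, ...)
    pref = torch.randint(0, 2, (B, D)).float() * 2 - 1  # random +/-1 outcomes
    loss = mcdpo_loss(torch.randn(B), torch.randn(B),
                      torch.randn(B), torch.randn(B), pref)
    print(loss.item())
```

In this reading, a sample that loses overall but wins on one axis still contributes a positive learning signal along that axis, which is the reward-conflict resolution the abstract claims.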
Similar Papers
Beyond Reward Margin: Rethinking and Resolving Likelihood Displacement in Diffusion Models via Video Generation
CV and Pattern Recognition
Makes AI videos better by learning what people like.
Difficulty-Based Preference Data Selection by DPO Implicit Reward Gap
Computation and Language
Chooses smart examples to teach AI better.
Beyond Single-Reward: Multi-Pair, Multi-Perspective Preference Optimization for Machine Translation
Computation and Language
Teaches computers to translate languages better.