APEX: Learning Adaptive Priorities for Multi-Objective Alignment in Vision-Language Generation
By: Dongliang Chen, Xinlin Zhuang, Junjie Xu, and more
Multi-objective alignment for text-to-image generation is commonly implemented via static linear scalarization, but fixed weights often fail under heterogeneous rewards, leading to optimization imbalance where models overfit high-variance, high-responsiveness objectives (e.g., OCR) while under-optimizing perceptual goals. We identify two mechanistic causes: variance hijacking, where reward dispersion induces implicit reweighting that dominates the normalized training signal, and gradient conflicts, where competing objectives produce opposing update directions and trigger seesaw-like oscillations. We propose APEX (Adaptive Priority-based Efficient X-objective Alignment), which stabilizes heterogeneous rewards with Dual-Stage Adaptive Normalization and dynamically schedules objectives via P^3 Adaptive Priorities that combine learning potential, conflict penalty, and progress need. On Stable Diffusion 3.5, APEX achieves improved Pareto trade-offs across four heterogeneous objectives, with balanced gains of +1.31 PickScore, +0.35 DeQA, and +0.53 Aesthetics while maintaining competitive OCR accuracy, thereby mitigating the instability of multi-objective alignment.
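The abstract does not include code, but the two components it names, reward normalization and priority-based scheduling, can be illustrated. The sketch below is a minimal, hypothetical rendering of how per-objective rewards might be normalized with running statistics and how priority weights might combine learning potential, conflict penalty, and progress need; every function name, formula, and constant here is an assumption for illustration, not the paper's released implementation.

```python
# Hypothetical sketch of adaptive priority weighting for multi-objective alignment.
# Terms (learning potential, conflict penalty, progress need) follow the abstract;
# the concrete formulas are assumptions, not APEX's actual method.
import numpy as np

def normalize_rewards(rewards, running_mean, running_std, momentum=0.99, eps=1e-8):
    """Normalize a batch of rewards (batch x objectives) in two stages:
    per-batch statistics smoothed into running statistics, so a high-variance
    objective (e.g., OCR) cannot implicitly dominate the training signal."""
    batch_mean = rewards.mean(axis=0)
    batch_std = rewards.std(axis=0) + eps
    running_mean = momentum * running_mean + (1 - momentum) * batch_mean
    running_std = momentum * running_std + (1 - momentum) * batch_std
    normalized = (rewards - running_mean) / (running_std + eps)
    return normalized, running_mean, running_std

def adaptive_priorities(progress, grad_cosine, alpha=1.0, beta=1.0, gamma=1.0):
    """Combine three per-objective signals into priority weights:
    - learning potential: remaining headroom, 1 - progress toward a target score
    - conflict penalty: how strongly this objective's gradient opposes the others
    - progress need: how far the objective lags behind the average objective."""
    progress = np.clip(progress, 0.0, 1.0)
    learning_potential = 1.0 - progress
    conflict_penalty = np.clip(-grad_cosine, 0.0, 1.0)  # penalize only opposing gradients
    progress_need = np.clip(progress.mean() - progress, 0.0, 1.0)
    score = alpha * learning_potential - beta * conflict_penalty + gamma * progress_need
    weights = np.exp(score - score.max())               # softmax over objectives
    return weights / weights.sum()

# Example: 4 objectives (OCR, PickScore, DeQA, Aesthetics). OCR is nearly saturated
# and its gradient conflicts with the perceptual objectives, so its weight drops.
progress = np.array([0.9, 0.4, 0.5, 0.45])
grad_cosine = np.array([-0.6, 0.2, 0.1, 0.15])  # avg cosine of each objective's grad vs. the rest
print(adaptive_priorities(progress, grad_cosine))
```

In this toy run the near-saturated, conflicting objective receives the smallest weight while lagging perceptual objectives are prioritized, which is the qualitative behavior the abstract attributes to APEX's scheduling.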
Similar Papers
Learning to Optimize Multi-Objective Alignment Through Dynamic Reward Weighting
Machine Learning (CS)
Teaches AI to balance many goals at once.
Calibrated Multi-Preference Optimization for Aligning Diffusion Models
CV and Pattern Recognition
Makes AI art better by learning from many opinions.
Self-Improvement Towards Pareto Optimality: Mitigating Preference Conflicts in Multi-Objective Alignment
Machine Learning (CS)
Teaches AI to follow many rules at once.