InPO: Inversion Preference Optimization with Reparametrized DDIM for Efficient Diffusion Model Alignment
By: Yunhong Lu, Qichao Wang, Hengyuan Cao, and more
Potential Business Impact:
Makes AI art match what people like.
Without using an explicit reward model, direct preference optimization (DPO) employs paired human preference data to fine-tune generative models, a method that has garnered considerable attention in large language models (LLMs). However, aligning text-to-image (T2I) diffusion models with human preferences remains relatively unexplored. Compared with supervised fine-tuning, existing methods that align diffusion models suffer from low training efficiency and subpar generation quality due to the long Markov chain process and the intractability of the reverse process. To address these limitations, we introduce DDIM-InPO, an efficient method for direct preference alignment of diffusion models. Our approach conceptualizes the diffusion model as a single-step generative model, allowing us to selectively fine-tune the outputs of specific latent variables. To accomplish this, we first assign implicit rewards to any latent variable directly via a reparameterization technique, and then construct an inversion technique to estimate the appropriate latent variables for preference optimization. As a result, the diffusion model fine-tunes only the outputs of latent variables that correlate strongly with the preference dataset. Experimental results show that DDIM-InPO achieves state-of-the-art performance with just 400 fine-tuning steps, surpassing all preference-alignment baselines for T2I diffusion models on human preference evaluation tasks.
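To make the abstract's idea concrete, the sketch below shows one plausible reading of the pipeline: deterministic DDIM inversion maps a preferred and a dispreferred sample to latents at a chosen timestep, and a DPO-style logistic loss compares the policy's and a frozen reference model's implicit rewards at those latents. This is an illustrative approximation, not the authors' released code; the toy `TinyEpsNet` network, the alpha schedule, the function names, and the `beta` value are all assumptions made for the example.

```python
# Hedged sketch: DPO-style preference loss on DDIM-inverted latents (single-step view).
# All names and hyperparameters here are illustrative assumptions, not the paper's code.
import torch
import torch.nn.functional as F

class TinyEpsNet(torch.nn.Module):
    """Toy stand-in for a noise-prediction (epsilon) network."""
    def __init__(self, dim):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(dim + 1, 128), torch.nn.SiLU(), torch.nn.Linear(128, dim)
        )
    def forward(self, x, t_frac):
        # t_frac: timestep normalized to [0, 1], appended as an extra feature.
        return self.net(torch.cat([x, t_frac.expand(x.shape[0], 1)], dim=-1))

def ddim_invert(model, x0, alphas, num_steps):
    """Deterministic DDIM inversion: map a clean sample x0 toward a noisier latent."""
    x = x0
    for i in range(num_steps - 1):
        a_t, a_next = alphas[i], alphas[i + 1]
        eps = model(x, torch.tensor([[i / num_steps]]))
        x0_pred = (x - (1 - a_t).sqrt() * eps) / a_t.sqrt()
        x = a_next.sqrt() * x0_pred + (1 - a_next).sqrt() * eps
    return x

def inpo_style_loss(model, ref_model, x_w, x_l, alphas, t_index, beta=500.0):
    """DPO-style loss on one inverted latent, treating the chain as a single step."""
    a_t = alphas[t_index]
    t_frac = torch.tensor([[t_index / len(alphas)]])
    with torch.no_grad():
        # Invert preferred (x_w) and dispreferred (x_l) samples with the reference model.
        z_w = ddim_invert(ref_model, x_w, alphas, t_index + 1)
        z_l = ddim_invert(ref_model, x_l, alphas, t_index + 1)
        # Target noise implied by the inverted latent under the one-step (x0 -> z) view.
        eps_w = (z_w - a_t.sqrt() * x_w) / (1 - a_t).sqrt()
        eps_l = (z_l - a_t.sqrt() * x_l) / (1 - a_t).sqrt()
        err_ref_w = F.mse_loss(ref_model(z_w, t_frac), eps_w)
        err_ref_l = F.mse_loss(ref_model(z_l, t_frac), eps_l)
    err_w = F.mse_loss(model(z_w, t_frac), eps_w)
    err_l = F.mse_loss(model(z_l, t_frac), eps_l)
    # Implicit reward gap: improve on the preferred latent, relative to the reference,
    # more than on the dispreferred one.
    logits = -beta * ((err_w - err_ref_w) - (err_l - err_ref_l))
    return -F.logsigmoid(logits).mean()

if __name__ == "__main__":
    dim, steps = 16, 10
    alphas = torch.linspace(0.99, 0.1, steps)  # toy cumulative-alpha schedule
    model, ref_model = TinyEpsNet(dim), TinyEpsNet(dim)
    ref_model.load_state_dict(model.state_dict())
    x_w, x_l = torch.randn(4, dim), torch.randn(4, dim)  # preferred / dispreferred pairs
    loss = inpo_style_loss(model, ref_model, x_w, x_l, alphas, t_index=5)
    loss.backward()
    print(float(loss))
```

The design point this sketch tries to capture is that the loss touches only the latents recovered by inversion from the preference pair, rather than noised samples from the full forward chain, which is what the abstract credits for the method's training efficiency.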
Similar Papers
Smoothed Preference Optimization via ReNoise Inversion for Aligning Diffusion Models with Varied Human Preferences
CV and Pattern Recognition
Makes AI art better by learning what people like.
Towards Self-Improvement of Diffusion Models via Group Preference Optimization
CV and Pattern Recognition
Makes AI pictures better by learning from groups.
Preference-Based Alignment of Discrete Diffusion Models
Machine Learning (CS)
Teaches AI to make better choices without rewards.