ProxT2I: Efficient Reward-Guided Text-to-Image Generation via Proximal Diffusion
By: Zhenghan Fang, Jian Zheng, Qiaozi Gao, and more
Potential Business Impact:
Creates better pictures from words, faster.
Diffusion models have emerged as a dominant paradigm for generative modeling across a wide range of domains, including prompt-conditional generation. The vast majority of samplers, however, rely on forward discretization of the reverse diffusion process and use score functions learned from data. Such forward, explicit discretizations can be slow and unstable, requiring a large number of sampling steps to produce good-quality samples. In this work we develop a text-to-image (T2I) diffusion model based on backward discretizations, dubbed ProxT2I, which relies on learned, prompt-conditional proximal operators instead of score functions. We further leverage recent advances in reinforcement learning and policy optimization to optimize our samplers for task-specific rewards. Additionally, we introduce LAION-Face-T2I-15M, a new large-scale, open-source dataset of 15 million high-quality human images with fine-grained captions, for training and evaluation. Our approach consistently improves sampling efficiency and human-preference alignment over score-based baselines, and achieves results on par with existing state-of-the-art open-source text-to-image models while requiring less compute and a smaller model size, offering a lightweight yet performant solution for human-centric text-to-image generation.
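For intuition, here is a minimal sketch of the backward-discretization idea the abstract refers to, using the standard definition of the proximal operator; the paper's exact conditional parameterization and noise schedule may differ. The proximal operator of a function f with step size λ is

prox_{λf}(v) = argmin_x [ f(x) + (1/(2λ)) ‖x − v‖² ].

An explicit (forward) Euler step on the reverse process moves along the score evaluated at the current iterate,

x_{k+1} = x_k + η ∇ log p_t(x_k),

whereas a backward (implicit) step evaluates the score at the new iterate,

x_{k+1} = x_k + η ∇ log p_t(x_{k+1}),

which is exactly the proximal step x_{k+1} = prox_{ηf}(x_k) with f = −log p_t. Per the abstract, ProxT2I learns this conditional proximal operator directly with a network, rather than learning the score and discretizing it explicitly.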
Similar Papers
Instant Preference Alignment for Text-to-Image Diffusion Models
CV and Pattern Recognition
Creates images that match your exact ideas.
Training-Free Diffusion Priors for Text-to-Image Generation via Optimization-based Visual Inversion
CV and Pattern Recognition
Makes AI create better pictures from words.
Reusing Computation in Text-to-Image Diffusion for Efficient Generation of Image Sets
CV and Pattern Recognition
Makes AI art faster and cheaper.