Score: 4

ProxT2I: Efficient Reward-Guided Text-to-Image Generation via Proximal Diffusion

Published: November 24, 2025 | arXiv ID: 2511.18742v1

By: Zhenghan Fang, Jian Zheng, Qiaozi Gao, and more

BigTech Affiliations: Johns Hopkins University, Amazon

Potential Business Impact:

Generates higher-quality images from text prompts in fewer sampling steps, cutting compute cost and model size.

Business Areas:
Visual Search, Internet Services

Diffusion models have emerged as a dominant paradigm for generative modeling across a wide range of domains, including prompt-conditional generation. The vast majority of samplers, however, rely on forward discretization of the reverse diffusion process and use score functions that are learned from data. Such forward and explicit discretizations can be slow and unstable, requiring a large number of sampling steps to produce good-quality samples. In this work we develop a text-to-image (T2I) diffusion model based on backward discretizations, dubbed ProxT2I, relying on learned, conditional proximal operators instead of score functions. We further leverage recent advances in reinforcement learning and policy optimization to optimize our samplers for task-specific rewards. Additionally, we develop a new large-scale and open-source dataset comprising 15 million high-quality human images with fine-grained captions, called LAION-Face-T2I-15M, for training and evaluation. Our approach consistently enhances sampling efficiency and human-preference alignment compared to score-based baselines, and achieves results on par with existing state-of-the-art and open-source text-to-image models while requiring lower compute and a smaller model size, offering a lightweight yet performant solution for human text-to-image generation.
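
For intuition, the sketch below contrasts the two kinds of updates the abstract refers to: a forward (explicit) score-based step versus a backward (implicit) proximal step. This is a minimal illustration under assumed conventions, not ProxT2I's actual sampler: the network stubs (a `DummyNet` standing in for `score_net` and `prox_net`), the variance-preserving ODE form, and the step sizes are all placeholders.

```python
import torch

# Illustrative sketch only: contrasts the explicit (score-based) and
# implicit (proximal) discretizations described in the abstract. Network
# architectures and schedules are assumptions, not ProxT2I's code.

class DummyNet(torch.nn.Module):
    """Placeholder for a learned score or proximal network."""
    def __init__(self, dim):
        super().__init__()
        self.lin = torch.nn.Linear(dim + 1, dim)

    def forward(self, x, t):
        t_feat = t.expand(x.shape[0], 1)  # broadcast scalar time to batch
        return self.lin(torch.cat([x, t_feat], dim=-1))

def explicit_score_step(x, t, dt, score_net, beta=1.0):
    # Forward (explicit) Euler step of the probability-flow ODE under a
    # variance-preserving parameterization:
    #   x_{t-dt} = x_t + 0.5 * beta * (x_t + s_theta(x_t, t)) * dt
    # Stability constrains dt, which is why explicit samplers typically
    # need many small steps.
    return x + 0.5 * beta * (x + score_net(x, t)) * dt

def implicit_prox_step(x, t, prox_net):
    # Backward (implicit) step: the update is a proximal operator,
    #   prox_{lam f}(v) = argmin_z f(z) + ||z - v||^2 / (2 * lam),
    # which is learned directly, so a single network evaluation replaces
    # solving the implicit equation at each step.
    return prox_net(x, t)

if __name__ == "__main__":
    dim, n = 8, 4
    x = torch.randn(n, dim)
    t = torch.tensor([[0.9]])
    score_net, prox_net = DummyNet(dim), DummyNet(dim)
    x_explicit = explicit_score_step(x, t, dt=0.01, score_net=score_net)
    x_implicit = implicit_prox_step(x, t, prox_net=prox_net)
    print(x_explicit.shape, x_implicit.shape)  # both torch.Size([4, 8])
```

The practical point is that implicit (backward) updates are typically more stable for a given step size than explicit ones, which is what lets a proximal sampler take fewer, larger steps than a score-based baseline.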

Country of Origin
🇺🇸 United States

Repos / Data Links

Page Count
28 pages

Category
Computer Science:
Computer Vision and Pattern Recognition