Reusing Computation in Text-to-Image Diffusion for Efficient Generation of Image Sets
By: Dale Decatur, Thibault Groueix, Wang Yifan, and more
Potential Business Impact:
Makes AI art faster and cheaper.
Text-to-image diffusion models enable high-quality image generation but are computationally expensive. While prior work optimizes per-inference efficiency, we explore an orthogonal approach: reducing redundancy across correlated prompts. Our method leverages the coarse-to-fine nature of diffusion models, in which early denoising steps capture structure shared among similar prompts. We propose a training-free approach that clusters prompts by semantic similarity and shares computation across the early diffusion steps of each cluster. Experiments show that for models conditioned on image embeddings, our approach significantly reduces compute cost while improving image quality. By leveraging the unCLIP text-to-image prior, we improve the allocation of diffusion steps for greater efficiency. Our method integrates seamlessly with existing pipelines, scales with the size of the prompt set, and reduces the environmental and financial burden of large-scale text-to-image generation. Project page: https://ddecatur.github.io/hierarchical-diffusion/
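The abstract describes a two-phase scheme: cluster semantically similar prompts, run the early (structure-setting) denoising steps once per cluster, then branch into per-prompt steps for fine detail. The sketch below illustrates that control flow only, under stated assumptions: `embed_prompts` and `denoise_step` are hypothetical stand-ins for a real text encoder (e.g., CLIP) and a real diffusion model step, conditioning the shared phase on the cluster's mean embedding is an assumption, and the 20/30 step split is illustrative rather than the paper's actual step allocation.

```python
import numpy as np
from sklearn.cluster import KMeans

# --- Hypothetical placeholders; swap in real model components. ---

def embed_prompts(prompts):
    """Stand-in for a text encoder (e.g., CLIP); returns unit-norm embeddings."""
    rng = np.random.default_rng(0)
    e = rng.normal(size=(len(prompts), 512))
    return e / np.linalg.norm(e, axis=1, keepdims=True)

def denoise_step(latent, embedding, t):
    """Stand-in for one reverse-diffusion step of a conditional denoiser."""
    return latent  # no-op placeholder

def generate_image_set(prompts, n_clusters=2, total_steps=50, shared_steps=20):
    """Cluster prompts, share the early denoising steps within each cluster,
    then branch per prompt for the remaining fine-detail steps."""
    emb = embed_prompts(prompts)
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(emb)

    images = {}
    for c in range(n_clusters):
        idx = [i for i, lab in enumerate(labels) if lab == c]
        # One shared trajectory per cluster: early steps lay down the
        # coarse structure common to the whole cluster.
        centroid = emb[idx].mean(axis=0)
        latent = np.random.default_rng(c).normal(size=(4, 64, 64))
        for t in range(total_steps, total_steps - shared_steps, -1):
            latent = denoise_step(latent, centroid, t)
        # Branch: each prompt finishes from the shared intermediate latent.
        for i in idx:
            z = latent.copy()
            for t in range(total_steps - shared_steps, 0, -1):
                z = denoise_step(z, emb[i], t)
            images[prompts[i]] = z
    return images
```

Under this structure, a cluster of N prompts runs the shared phase once instead of N times, avoiding roughly shared_steps × (N − 1) denoising steps per cluster; the savings grow with the size of the prompt set, consistent with the abstract's scaling claim.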
Similar Papers
Cost-Aware Routing for Efficient Text-To-Image Generation
CV and Pattern Recognition
Makes AI art faster by choosing the right tool.
Beyond the Noise: Aligning Prompts with Latent Representations in Diffusion Models
CV and Pattern Recognition
Finds bad AI pictures while they're still being made.
DiffusionX: Efficient Edge-Cloud Collaborative Image Generation with Multi-Round Prompt Evolution
CV and Pattern Recognition
Makes AI art faster and better.