Diverse Text-to-Image Generation via Contrastive Noise Optimization
By: Byungjun Kim, Soobin Um, Jong Chul Ye
Potential Business Impact:
Makes AI-generated images more varied and interesting.
Text-to-image (T2I) diffusion models have demonstrated impressive performance in generating high-fidelity images, largely enabled by text-guided inference. However, this advantage often comes with a critical drawback: limited diversity, as outputs tend to collapse into similar modes under strong text guidance. Existing approaches typically optimize intermediate latents or text conditions during inference, but these methods deliver only modest gains or remain sensitive to hyperparameter tuning. In this work, we introduce Contrastive Noise Optimization, a simple yet effective method that addresses the diversity issue from a distinct perspective. Unlike prior techniques that adapt intermediate latents, our approach shapes the initial noise to promote diverse outputs. Specifically, we develop a contrastive loss defined in the Tweedie data space and optimize a batch of noise latents. Our contrastive optimization repels instances within the batch to maximize diversity while keeping them anchored to a reference sample to preserve fidelity. We further provide theoretical insights into the mechanism of this preprocessing to substantiate its effectiveness. Extensive experiments across multiple T2I backbones demonstrate that our approach achieves a superior quality-diversity Pareto frontier while remaining robust to hyperparameter choices.
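The core idea in the abstract — optimize a batch of initial noise latents with a contrastive loss in the Tweedie data space, repelling batch members from each other while anchoring them to a reference — can be sketched in a few lines. Below is a minimal, self-contained NumPy illustration, not the authors' implementation: the Tweedie posterior-mean estimate is replaced by a fixed linear map, and the anchoring weight `lam`, step size, step count, and the Gaussian-shell projection are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d, batch = 8, 4        # toy latent dimension and batch size
lam = 0.5              # anchoring weight (hypothetical value)
lr, steps = 0.02, 100  # illustrative optimization settings

# Stand-in for the Tweedie posterior-mean estimate x0_hat(z): in a real
# T2I diffusion model this would be one denoiser evaluation; here a fixed
# linear map keeps the sketch self-contained and differentiable by hand.
A = rng.normal(size=(d, d)) / np.sqrt(d)

def tweedie(z):
    return z @ A.T

z = rng.normal(size=(batch, d))   # batch of initial noise latents
x_ref = tweedie(z[0].copy())      # reference Tweedie estimate (kept fixed)

def mean_pairwise_dist(x):
    # average distance between distinct batch members in Tweedie space
    diffs = x[:, None, :] - x[None, :, :]
    return np.sqrt((diffs ** 2).sum(-1))[np.triu_indices(len(x), 1)].mean()

div_before = mean_pairwise_dist(tweedie(z))

for _ in range(steps):
    x = tweedie(z)
    # dL/dx_i for L = -sum_{i<j} ||x_i - x_j||^2 + lam * sum_i ||x_i - x_ref||^2:
    # repel the other batch members, stay anchored to the reference sample.
    grad_x = -2.0 * (batch * x - x.sum(0)) + 2.0 * lam * (x - x_ref)
    z -= lr * grad_x @ A          # chain rule through x = z @ A.T
    # Project back to the Gaussian shell ||z|| = sqrt(d) (an assumption:
    # a common way to keep optimized noise distributed like N(0, I)).
    z *= np.sqrt(d) / np.linalg.norm(z, axis=1, keepdims=True)

div_after = mean_pairwise_dist(tweedie(z))
print(f"mean pairwise Tweedie distance: {div_before:.3f} -> {div_after:.3f}")
```

Even in this toy setting, the repulsion term spreads the batch apart in the (stand-in) data space, while the anchor term bounds how far each instance drifts from the reference — the quality-diversity trade-off the abstract describes.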
Similar Papers
Context-guided Responsible Data Augmentation with Diffusion Models
CV and Pattern Recognition
Improves AI image recognition by augmenting training data with synthetic images.
NOFT: Test-Time Noise Finetune via Information Bottleneck for Highly Correlated Asset Creation
CV and Pattern Recognition
Lets AI create many varied images from a single source image.