Diverse Text-to-Image Generation via Contrastive Noise Optimization
By: Byungjun Kim, Soobin Um, Jong Chul Ye
Potential Business Impact:
Makes AI-generated pictures more varied and interesting.
Text-to-image (T2I) diffusion models have demonstrated impressive performance in generating high-fidelity images, largely enabled by text-guided inference. However, this advantage often comes with a critical drawback: limited diversity, as outputs tend to collapse into similar modes under strong text guidance. Existing approaches typically optimize intermediate latents or text conditions during inference, but these methods deliver only modest gains or remain sensitive to hyperparameter tuning. In this work, we introduce Contrastive Noise Optimization, a simple yet effective method that addresses the diversity issue from a distinct perspective. Unlike prior techniques that adapt intermediate latents, our approach shapes the initial noise to promote diverse outputs. Specifically, we develop a contrastive loss defined in the Tweedie data space and optimize a batch of noise latents. Our contrastive optimization repels instances within the batch to maximize diversity while keeping them anchored to a reference sample to preserve fidelity. We further provide theoretical insights into the mechanism of this preprocessing to substantiate its effectiveness. Extensive experiments across multiple T2I backbones demonstrate that our approach achieves a superior quality-diversity Pareto frontier while remaining robust to hyperparameter choices.
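To make the mechanism concrete, below is a minimal sketch of the idea as described in the abstract, not the authors' implementation. All names and hyperparameters here are assumptions: `eps_model` stands for any epsilon-prediction T2I denoiser with the text condition and final timestep already bound, the cosine-similarity repulsion and L2 anchor are plausible instantiations of the contrastive loss in Tweedie space, and the paper's exact loss, weights, and optimizer may differ.

```python
# Hedged sketch of contrastive noise optimization (not the authors' code).
import torch
import torch.nn.functional as F


def tweedie_x0(z, eps, alpha_bar):
    # Tweedie estimate of the clean image from a noisy latent z,
    # given the model's noise prediction eps and the cumulative alpha.
    return (z - torch.sqrt(1.0 - alpha_bar) * eps) / torch.sqrt(alpha_bar)


def contrastive_noise_optimization(eps_model, z_init, alpha_bar_T,
                                   n_steps=50, lr=1e-2,
                                   repel_weight=1.0, anchor_weight=0.1):
    """Optimize a batch of initial noises so their Tweedie estimates repel
    each other (diversity) while staying near a frozen reference estimate
    (fidelity). Hyperparameters are illustrative, not from the paper."""
    z = z_init.clone().requires_grad_(True)
    opt = torch.optim.Adam([z], lr=lr)

    with torch.no_grad():
        # Frozen anchor: Tweedie estimate of the unmodified noise batch.
        x0_ref = tweedie_x0(z_init, eps_model(z_init), alpha_bar_T)

    for _ in range(n_steps):
        x0 = tweedie_x0(z, eps_model(z), alpha_bar_T)
        flat = x0.flatten(1)                                   # (B, D)
        sim = F.cosine_similarity(flat.unsqueeze(1),
                                  flat.unsqueeze(0), dim=-1)   # (B, B)
        mask = ~torch.eye(len(flat), dtype=torch.bool, device=sim.device)
        repel = sim[mask].mean()            # push batch members apart
        anchor = F.mse_loss(x0, x0_ref)     # stay close to the reference
        loss = repel_weight * repel + anchor_weight * anchor
        opt.zero_grad()
        loss.backward()
        opt.step()

    # The optimized noises are then fed to the usual sampler (e.g. DDIM).
    return z.detach()


# Toy usage with a stand-in denoiser; in practice eps_model would wrap a
# real T2I UNet with text condition and final timestep bound, e.g.
# eps_model = lambda z: unet(z, t=T, encoder_hidden_states=text_emb).
if __name__ == "__main__":
    torch.manual_seed(0)
    stand_in = torch.nn.Conv2d(4, 4, 3, padding=1)
    z0 = torch.randn(4, 4, 32, 32)
    z_opt = contrastive_noise_optimization(stand_in, z0,
                                           alpha_bar_T=torch.tensor(1e-2))
```

The key design point the abstract emphasizes is that this is pure preprocessing: only the initial noise is optimized before sampling begins, so the sampler itself runs unmodified afterward, in contrast to methods that intervene on intermediate latents at every denoising step.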
Similar Papers
Highly Efficient Test-Time Scaling for T2I Diffusion Models with Text Embedding Perturbation
CV and Pattern Recognition
Makes AI art more creative and detailed.
Single-Reference Text-to-Image Manipulation with Dual Contrastive Denoising Score
CV and Pattern Recognition
Edits photos using words, keeping original look.