Text-to-Image Alignment in Denoising-Based Models through Step Selection
By: Paul Grimal, Hervé Le Borgne, Olivier Ferret
Potential Business Impact:
Makes AI pictures match words better.
Visual generative AI models often encounter challenges related to text-image alignment and reasoning limitations. This paper presents a novel method for selectively enhancing the signal at critical denoising steps, optimizing image generation based on input semantics. Our approach addresses the shortcomings of early-stage signal modifications, demonstrating that adjustments made at later stages yield superior results. We conduct extensive experiments to validate the effectiveness of our method in producing semantically aligned images on Diffusion and Flow Matching models, achieving state-of-the-art performance. Our results highlight the importance of a judicious choice of sampling stage to improve performance and overall image alignment.
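To make the core idea concrete, below is a minimal, hedged sketch of step-selective signal enhancement in a denoising loop: the text-conditioning signal (here, a classifier-free-guidance-style scale) is strengthened only at later sampling steps rather than throughout. This is an illustration of the general idea under stated assumptions, not the paper's actual algorithm; the `denoiser` function, the Euler-style update, and parameters such as `boost_after` and `boosted_scale` are placeholders introduced for this example.

```python
import torch

# Placeholder denoiser standing in for a real diffusion / flow matching model.
# A real model would return a predicted noise or velocity given the latent,
# the timestep, and the text conditioning.
def denoiser(latent, t, conditioned):
    return latent * 0.0 if conditioned else latent * 0.01

def sample(latent, timesteps, base_scale=5.0, boosted_scale=9.0, boost_after=0.6):
    """Sampling loop where the conditioning signal is selectively enhanced
    at later denoising steps (the fraction of steps after `boost_after`)."""
    n = len(timesteps)
    for i, t in enumerate(timesteps):
        uncond = denoiser(latent, t, conditioned=False)
        cond = denoiser(latent, t, conditioned=True)
        # Step selection: later steps receive a stronger conditioning signal.
        scale = boosted_scale if i / n >= boost_after else base_scale
        pred = uncond + scale * (cond - uncond)
        # Simplified Euler-style update; a real sampler would follow its
        # scheduler's update rule instead.
        latent = latent - pred / n
    return latent

latent = torch.randn(1, 4, 64, 64)
timesteps = torch.linspace(1.0, 0.0, steps=50)
out = sample(latent, timesteps)
print(out.shape)
```

In this sketch, the only design choice specific to the paper's theme is where the enhancement is applied: boosting the signal on the later portion of the schedule rather than at early steps, which the abstract reports to be the more effective regime.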
Similar Papers
Re-Thinking the Automatic Evaluation of Image-Text Alignment in Text-to-Image Models
Computation and Language
Makes AI pictures match words better.
Fine-Grained Alignment and Noise Refinement for Compositional Text-to-Image Generation
CV and Pattern Recognition
Makes AI pictures match text descriptions better.
Asynchronous Denoising Diffusion Models for Aligning Text-to-Image Generation
CV and Pattern Recognition
Makes AI pictures match words better.