Beyond the Noise: Aligning Prompts with Latent Representations in Diffusion Models
By: Vasco Ramos, Regev Cohen, Idan Szpektor, and more
Conditional diffusion models rely on language-to-image alignment to steer generation toward semantically accurate outputs. Despite the success of this architecture, misalignment and hallucinations remain common, motivating automatic misalignment-detection tools to improve quality, for example by applying them in a Best-of-N (BoN) post-generation setting. Unfortunately, measuring alignment after generation is expensive: the entire generation must finish before prompt adherence can be determined. In contrast, this work hypothesizes that text/image misalignments can be detected early in the denoising process, enabling real-time alignment assessment without waiting for the complete generation. In particular, we propose NoisyCLIP, a method that measures semantic alignment in the noisy latent space. This work is the first to explore and benchmark prompt-to-latent misalignment detection during image generation using dual encoders in the reverse diffusion process. We evaluate NoisyCLIP qualitatively and quantitatively and find that it reduces computational cost by 50% while achieving 98% of CLIP's alignment performance in BoN settings. This approach enables real-time alignment assessment during generation, reducing cost without sacrificing semantic fidelity.
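The early Best-of-N idea in the abstract can be sketched as follows. This is a minimal, hypothetical illustration, not the paper's implementation: `score_alignment` stands in for a NoisyCLIP-style dual encoder (here a toy cosine similarity between a latent vector and a prompt embedding), and the candidate "latents" are random vectors. The point is only the control flow: score N partially denoised candidates at an early step, keep the best one, and skip the remaining denoising for the rest.

```python
import numpy as np


def score_alignment(latent: np.ndarray, prompt_emb: np.ndarray) -> float:
    """Toy alignment score: cosine similarity between a noisy latent and a
    prompt embedding (stand-in for a NoisyCLIP-style encoder pair)."""
    return float(
        np.dot(latent, prompt_emb)
        / (np.linalg.norm(latent) * np.linalg.norm(prompt_emb))
    )


def best_of_n_early(latents, prompt_emb):
    """Score N candidate latents at an early denoising step and return the
    index of the best-aligned candidate plus all scores. Only that one
    candidate would then be denoised to completion."""
    scores = [score_alignment(z, prompt_emb) for z in latents]
    return int(np.argmax(scores)), scores


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    prompt_emb = rng.normal(size=8)
    # N = 4 toy candidates; the last is built to align with the prompt.
    latents = [rng.normal(size=8) for _ in range(3)]
    latents.append(prompt_emb + 0.1 * rng.normal(size=8))
    idx, scores = best_of_n_early(latents, prompt_emb)
    print("selected candidate:", idx)
```

In a real pipeline the scorer would run on intermediate latents inside the reverse-diffusion loop (e.g. via a per-step callback), which is where the reported ~50% cost reduction comes from: the non-selected candidates are abandoned roughly halfway through denoising.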