Controlling Latent Diffusion Using Latent CLIP
By: Jason Becker, Chris Wendler, Peter Baylies, and more
Potential Business Impact:
Makes AI create pictures faster and safer.
Instead of performing text-conditioned denoising in the image domain, latent diffusion models (LDMs) operate in the latent space of a variational autoencoder (VAE), enabling more efficient processing at reduced computational cost. However, while the diffusion process has moved to the latent space, the contrastive language-image pre-training (CLIP) models used in many image processing tasks still operate in pixel space. This requires costly VAE decoding of latent images before they can be processed. In this paper, we introduce Latent-CLIP, a CLIP model that operates directly in latent space. We train Latent-CLIP on 2.7B pairs of latent images and descriptive texts and show that it matches the zero-shot classification performance of similarly sized CLIP models on both the ImageNet benchmark and an LDM-generated version of it, demonstrating its effectiveness in assessing both real and generated content. Furthermore, we construct Latent-CLIP rewards for reward-based noise optimization (ReNO) and show that they match the performance of their CLIP counterparts on GenEval and T2I-CompBench while cutting the cost of the total pipeline by 21%. Finally, we use Latent-CLIP to guide generation away from harmful content, achieving strong performance on the inappropriate image prompts (I2P) benchmark and a custom evaluation, without ever requiring the costly step of decoding intermediate images.
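To make the efficiency argument concrete: a pixel-space CLIP reward must decode every latent through the VAE before it can be scored, while a latent-space CLIP scores the latent directly and skips that step. The sketch below contrasts the two pipelines; it is a minimal illustration, and the module interfaces (`vae.decode`, `clip.encode_image`, `latent_clip.encode_latent`) as well as the cosine-similarity reward are assumptions made for exposition, not the paper's actual implementation.

```python
import torch
import torch.nn.functional as F


@torch.no_grad()
def pixel_clip_reward(latents, text_emb, vae, clip):
    """Conventional pipeline: decode latents to pixels, then embed with a pixel-space CLIP."""
    images = vae.decode(latents)                  # costly VAE decoding step
    img_emb = clip.encode_image(images)           # pixel-space image encoder
    return F.cosine_similarity(img_emb, text_emb, dim=-1)


@torch.no_grad()
def latent_clip_reward(latents, text_emb, latent_clip):
    """Latent-CLIP pipeline: embed the VAE latents directly, skipping decoding."""
    img_emb = latent_clip.encode_latent(latents)  # hypothetical latent-space encoder
    return F.cosine_similarity(img_emb, text_emb, dim=-1)
```

In a reward-based noise optimization loop such as ReNO, this reward is evaluated at every optimization step, so removing the VAE decode from the inner loop is where the reported pipeline savings would come from.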
Similar Papers
Is CLIP ideal? No. Can we fix it? Yes!
Machine Learning (CS)
Makes AI understand images and words better.
Beyond the Noise: Aligning Prompts with Latent Representations in Diffusion Models
CV and Pattern Recognition
Finds bad AI pictures while they're still being made.
LeakyCLIP: Extracting Training Data from CLIP
Cryptography and Security
Steals private pictures from AI's memory.