Test-Time Alignment of Text-to-Image Diffusion Models via Null-Text Embedding Optimisation
By: Taehoon Kim, Henry Gouk, Timothy Hospedales
Potential Business Impact:
Makes AI create better pictures by guiding its thoughts.
Test-time alignment (TTA) aims to adapt models to specific rewards during inference. However, existing methods tend to either under-optimise or over-optimise (reward hack) the target reward function. We propose Null-Text Test-Time Alignment (Null-TTA), which aligns diffusion models by optimising the unconditional embedding in classifier-free guidance, rather than manipulating latent or noise variables. Due to the structured semantic nature of the text embedding space, this ensures alignment occurs on a semantically coherent manifold and prevents reward hacking (exploiting non-semantic noise patterns to improve the reward). Since the unconditional embedding in classifier-free guidance serves as the anchor for the model's generative distribution, Null-TTA directly steers the model's generative distribution towards the target reward rather than merely adjusting individual samples, even without updating model parameters. Thanks to these desirable properties, we show that Null-TTA achieves state-of-the-art alignment to the target reward at test time while maintaining strong cross-reward generalisation. This establishes semantic-space optimisation as a novel, effective, and principled paradigm for TTA.
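The core idea, optimising only the unconditional ("null-text") embedding that anchors classifier-free guidance while leaving the model, the prompt embedding, and the latent untouched, can be illustrated with a toy sketch. Everything below is illustrative and assumed, not the paper's implementation: the "denoiser" is a random linear map, the reward is a simple quadratic, and gradients are taken by finite differences rather than backpropagation through a real diffusion model.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 8                                    # toy embedding / latent dimension
W = rng.standard_normal((D, D)) * 0.1    # toy stand-in for a denoiser's weights

def denoise(x, emb):
    """Toy stand-in for a diffusion denoiser conditioned on an embedding."""
    return x - W @ emb

def cfg_sample(x, cond_emb, null_emb, guidance=5.0):
    """Classifier-free guidance: the null embedding anchors the unconditional branch."""
    eps_c = denoise(x, cond_emb)   # conditional prediction
    eps_u = denoise(x, null_emb)   # unconditional prediction
    return eps_u + guidance * (eps_c - eps_u)

target = rng.standard_normal(D)

def reward(sample):
    """Toy reward: negative squared distance to a target direction."""
    return -np.sum((sample - target) ** 2)

x = rng.standard_normal(D)           # latent: held fixed
cond_emb = rng.standard_normal(D)    # prompt embedding: held fixed
null_emb = np.zeros(D)               # the ONLY variable we optimise

# Gradient ascent on the reward w.r.t. the null embedding alone,
# using central finite differences (no autodiff needed for this toy).
lr, h = 0.05, 1e-4
r_before = reward(cfg_sample(x, cond_emb, null_emb))
for _ in range(200):
    grad = np.zeros(D)
    for i in range(D):
        e = np.zeros(D)
        e[i] = h
        grad[i] = (reward(cfg_sample(x, cond_emb, null_emb + e))
                   - reward(cfg_sample(x, cond_emb, null_emb - e))) / (2 * h)
    null_emb += lr * grad
r_after = reward(cfg_sample(x, cond_emb, null_emb))
print(r_after > r_before)  # reward improves via the unconditional branch only
```

The point of the sketch is structural: because the guided sample depends on the null embedding through the unconditional branch, steering that single embedding shifts the whole guided output towards the reward without touching latents, noise, or model weights.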
Similar Papers
Highly Efficient Test-Time Scaling for T2I Diffusion Models with Text Embedding Perturbation
CV and Pattern Recognition
Makes AI art more creative and detailed.
ETTA: Efficient Test-Time Adaptation for Vision-Language Models through Dynamic Embedding Updates
CV and Pattern Recognition
Makes AI better at understanding new pictures.
Backpropagation-Free Test-Time Adaptation via Probabilistic Gaussian Alignment
CV and Pattern Recognition
Makes AI better at guessing without retraining.