Inference-Time Alignment of Diffusion Models with Evolutionary Algorithms
By: Purvish Jajal, Nick John Eliopoulos, Benjamin Shiue-Hal Chou, et al.
Potential Business Impact:
Makes AI image generators follow rules and produce safer outputs.
Diffusion models are state-of-the-art generative models in various domains, yet their samples often fail to satisfy downstream objectives such as safety constraints or domain-specific validity. Existing alignment techniques require gradients, internal model access, or large computational budgets. We introduce an inference-time alignment framework based on evolutionary algorithms (EAs). We treat diffusion models as black boxes and search their latent space to maximize alignment objectives. Our method enables efficient inference-time alignment for both differentiable and non-differentiable alignment objectives across a range of diffusion models. On the DrawBench and Open Image Preferences benchmarks, our EA methods outperform state-of-the-art gradient-based and gradient-free inference-time methods, while requiring 55% to 76% less GPU memory and running 72% to 80% faster than gradient-based methods. Over 50 optimization steps on Open Image Preferences, we achieve higher alignment scores than both gradient-based and gradient-free methods.
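To make the black-box search concrete, here is a minimal sketch, not the paper's exact algorithm, of how an evolutionary strategy can optimize a diffusion model's initial latent noise against an alignment score without gradients. The `sample_image` and `alignment_score` callables, the (mu + lambda) selection scheme, and all hyperparameter values are illustrative assumptions, not details taken from the paper.

```python
# Sketch: (mu + lambda) evolutionary search over diffusion latents.
# `sample_image` and `alignment_score` are hypothetical stand-ins for a real
# diffusion pipeline and a (possibly non-differentiable) reward model.
import numpy as np

def evolutionary_latent_search(
    sample_image,          # latent (D,) -> image; treated as a black box
    alignment_score,       # image -> float; may be non-differentiable
    latent_dim,            # dimensionality of the flattened latent noise
    pop_size=16,           # lambda: candidates evaluated per generation
    num_parents=4,         # mu: elites kept for the next generation
    steps=50,              # number of optimization steps (generations)
    sigma=0.3,             # mutation strength for Gaussian perturbations
    rng=None,
):
    rng = rng or np.random.default_rng(0)
    # Initialize with standard-normal latents, matching the Gaussian prior
    # a diffusion model expects for its initial noise.
    population = rng.standard_normal((pop_size, latent_dim))
    best_latent, best_score = None, -np.inf

    for _ in range(steps):
        # Score each candidate; no gradients are needed, so both the
        # diffusion model and the reward stay black boxes.
        scores = np.array([alignment_score(sample_image(z)) for z in population])

        # Track the best latent seen so far.
        top = int(np.argmax(scores))
        if scores[top] > best_score:
            best_score, best_latent = scores[top], population[top].copy()

        # Keep the top-mu parents and refill the population with
        # Gaussian-mutated offspring.
        parents = population[np.argsort(scores)[-num_parents:]]
        children = []
        while len(children) < pop_size - num_parents:
            parent = parents[rng.integers(num_parents)]
            children.append(parent + sigma * rng.standard_normal(latent_dim))
        population = np.vstack([parents, np.array(children)])

    return best_latent, best_score
```

Because only forward passes of the sampler and reward are required, a scheme like this avoids backpropagating through the denoising trajectory, which is where the memory and running-time savings over gradient-based inference-time methods would come from.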
Similar Papers
Dynamic Search for Inference-Time Alignment in Diffusion Models
Machine Learning (CS)
Makes AI create better things by searching smarter.
Evolvable Conditional Diffusion
Machine Learning (CS)
Helps computers discover new science without math.
Evolutionary Caching to Accelerate Your Off-the-Shelf Diffusion Model
CV and Pattern Recognition
Makes AI art programs create pictures much faster.