Aligning Latent Spaces with Flow Priors
By: Yizhuo Li, Yuying Ge, Yixiao Ge, and more
Potential Business Impact:
Makes AI create more realistic pictures.
This paper presents a novel framework for aligning learnable latent spaces to arbitrary target distributions by leveraging flow-based generative models as priors. Our method first pretrains a flow model on the target features to capture the underlying distribution. This fixed flow model subsequently regularizes the latent space via an alignment loss, which reformulates the flow matching objective to treat the latents as optimization targets. We formally prove that minimizing this alignment loss establishes a computationally tractable surrogate objective for maximizing a variational lower bound on the log-likelihood of latents under the target distribution. Notably, the proposed method eliminates computationally expensive likelihood evaluations and avoids ODE solving during optimization. As a proof of concept, we demonstrate in a controlled setting that the alignment loss landscape closely approximates the negative log-likelihood of the target distribution. We further validate the effectiveness of our approach through large-scale image generation experiments on ImageNet with diverse target distributions, accompanied by detailed discussions and ablation studies. With both theoretical and empirical validation, our framework paves the way for a new approach to latent space alignment.
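To make the abstract's key idea concrete, here is a minimal NumPy sketch of an alignment loss in the spirit described: a conditional flow-matching regression in which the velocity field is frozen and the latent `z` is the optimization target. The `frozen_velocity` function is a hypothetical stand-in for the pretrained flow model (the paper trains a real one), and the interpolation and target conventions are standard flow-matching assumptions, not details taken from the paper. Note that the loss requires no likelihood evaluation and no ODE solve.

```python
import numpy as np

rng = np.random.default_rng(0)

def frozen_velocity(x_t, t):
    # Hypothetical stand-in for the pretrained flow model's velocity field.
    # In the paper this is a learned network kept fixed during alignment.
    return -x_t  # toy linear field for illustration only

def alignment_loss(z, n_samples=64):
    """Flow-matching loss with the latent z treated as the optimization target.

    Samples t ~ U(0, 1) and Gaussian noise eps, forms the interpolant
    x_t = (1 - t) * eps + t * z, and regresses the frozen velocity field
    against the conditional target (z - eps). Minimizing this over z (with
    the flow model fixed) is the surrogate objective: no likelihood
    evaluations, no ODE solving.
    """
    d = z.shape[0]
    total = 0.0
    for _ in range(n_samples):
        t = rng.uniform()
        eps = rng.standard_normal(d)
        x_t = (1.0 - t) * eps + t * z       # linear interpolation path
        target = z - eps                     # conditional flow-matching target
        v = frozen_velocity(x_t, t)          # frozen prior; only z would be updated
        total += float(np.sum((v - target) ** 2))
    return total / n_samples

z = rng.standard_normal(8)  # a latent vector being aligned
loss = alignment_loss(z)
print(loss)
```

In an actual training loop, the gradient of this loss with respect to `z` (or with respect to an encoder producing `z`) would drive the latents toward high-likelihood regions of the flow prior, which is the variational-bound argument the abstract refers to.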
Similar Papers
Flows and Diffusions on the Neural Manifold
Machine Learning (CS)
Makes AI learn better and find bad AI behavior.
Solving Inverse Problems with FLAIR
CV and Pattern Recognition
Makes blurry pictures sharp and clear.
Latent Refinement via Flow Matching for Training-free Linear Inverse Problem Solving
CV and Pattern Recognition
Makes blurry pictures clear using smart math.