Flows and Diffusions on the Neural Manifold
By: Daniel Saragih, Deyu Cao, Tejas Balaji
Potential Business Impact:
Enables generating neural network weights that improve training initialization and help detect harmful distribution shifts in safety-critical systems.
Diffusion and flow-based generative models have achieved remarkable success in domains such as image synthesis, video generation, and natural language modeling. In this work, we extend these advances to weight space learning by leveraging recent techniques to incorporate structural priors derived from optimization dynamics. Central to our approach is modeling the trajectory induced by gradient descent as a trajectory inference problem. We unify several trajectory inference techniques under the framework of gradient flow matching, providing a theoretical basis for treating optimization paths as an inductive bias. We further explore architectural and algorithmic choices, including reward fine-tuning by adjoint matching, the use of autoencoders for latent weight representations, conditioning on task-specific context data, and adopting informative source distributions such as Kaiming uniform. Experiments demonstrate that our method matches or surpasses baselines in generating in-distribution weights, improves initialization for downstream training, and supports fine-tuning to enhance performance. Finally, we illustrate a practical application in safety-critical systems: detecting harmful covariate shifts, where our method outperforms the closest comparable baseline.
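As a rough illustration of the core idea, the sketch below shows conditional flow matching over flattened weight vectors with a Kaiming-uniform-style source distribution, as the abstract suggests. The names (`VelocityNet`, `flow_matching_step`), the MLP velocity field, the straight-line probability path, and the fan-in assumption in the Kaiming bound are all illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class VelocityNet(nn.Module):
    """Illustrative velocity field v_theta(w_t, t) over flattened weight vectors."""
    def __init__(self, weight_dim: int, hidden: int = 512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(weight_dim + 1, hidden), nn.SiLU(),
            nn.Linear(hidden, hidden), nn.SiLU(),
            nn.Linear(hidden, weight_dim),
        )

    def forward(self, w_t: torch.Tensor, t: torch.Tensor) -> torch.Tensor:
        # Condition on time by concatenating t to the noisy weight vector.
        return self.net(torch.cat([w_t, t], dim=-1))

def flow_matching_step(model: VelocityNet, opt: torch.optim.Optimizer,
                       w_target: torch.Tensor, weight_dim: int) -> float:
    """One conditional flow-matching update.

    Source samples w_0 come from a Kaiming-uniform-style prior over weights
    (fan-in assumed equal to weight_dim, purely for illustration); targets
    w_1 are trained weights. Along the straight-line path, the regression
    target for the velocity field is w_1 - w_0.
    """
    batch = w_target.shape[0]
    bound = (6.0 / weight_dim) ** 0.5                       # Kaiming-uniform bound
    w_0 = (torch.rand(batch, weight_dim) * 2 - 1) * bound   # informative source sample
    t = torch.rand(batch, 1)
    w_t = (1 - t) * w_0 + t * w_target                      # linear interpolation path
    v_pred = model(w_t, t)
    loss = ((v_pred - (w_target - w_0)) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```

Trained this way, the velocity field can be integrated from a Kaiming-uniform sample at t=0 to t=1 with any ODE solver to produce new weight vectors; the same scaffolding extends to latent weight representations or task-conditioned inputs described in the abstract.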