Disentangled representations via score-based variational autoencoders
By: Benjamin S. H. Lyo, Eero P. Simoncelli, Cristina Savin
We present the Score-based Autoencoder for Multiscale Inference (SAMI), a method for unsupervised representation learning that combines the theoretical frameworks of diffusion models and VAEs. By unifying their respective evidence lower bounds, SAMI formulates a principled objective that learns representations through score-based guidance of the underlying diffusion process. The resulting representations automatically capture meaningful structure in the data: SAMI recovers ground-truth generative factors in our synthetic dataset, learns factorized, semantic latent dimensions from complex natural images, and encodes video sequences into latent trajectories that are straighter than those of alternative encoders, despite training exclusively on static images. Furthermore, SAMI can extract useful representations from pre-trained diffusion models with minimal additional training. Finally, its explicitly probabilistic formulation provides new ways to identify semantically meaningful axes in the absence of supervised labels, and its mathematical exactness allows us to make formal statements about the nature of the learned representation. Overall, these results indicate that implicit structural information in diffusion models can be made explicit and interpretable through synergistic combination with a variational autoencoder.
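The abstract describes an objective that joins the diffusion evidence lower bound (a denoising score-matching term) with the VAE evidence lower bound (a KL regularizer on an amortized latent posterior). The toy NumPy sketch below illustrates the general shape of such a combined loss; it is not the paper's actual SAMI objective, and every component here (the linear "encoder", the stand-in latent-conditioned denoiser, the noise schedule) is a hypothetical placeholder chosen for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: a small batch of 2-D points
x = rng.normal(size=(16, 2))

# Hypothetical linear "encoder" weights producing a Gaussian posterior q(z|x)
W_mu = rng.normal(size=(2, 2))
W_lv = rng.normal(size=(2, 2)) * 0.1

def combined_elbo_loss(x):
    # VAE-style amortized posterior: z ~ q(z|x) via the reparameterization trick
    mu, logvar = x @ W_mu, x @ W_lv
    z = mu + np.exp(0.5 * logvar) * rng.normal(size=mu.shape)

    # One diffusion step: corrupt x with Gaussian noise at a random time t
    t = rng.uniform(size=(x.shape[0], 1))
    eps = rng.normal(size=x.shape)
    x_t = np.sqrt(1.0 - t) * x + np.sqrt(t) * eps

    # Stand-in for a latent-conditioned score/denoiser network:
    # predicts the noise eps from the corrupted input and the latent z
    eps_hat = 0.5 * x_t + 0.5 * z

    # Denoising score-matching term (diffusion ELBO surrogate)
    recon = np.mean((eps_hat - eps) ** 2)
    # KL(q(z|x) || N(0, I)) term (VAE ELBO regularizer), non-negative
    kl = -0.5 * np.mean(1.0 + logvar - mu**2 - np.exp(logvar))
    return recon + kl

loss = combined_elbo_loss(x)
```

In a real implementation both the encoder and the denoiser would be neural networks trained jointly, with the latent z steering the diffusion process; here they are frozen linear maps purely to show how the two ELBO terms enter a single scalar loss.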