Walking on the Fiber: A Simple Geometric Approximation for Bayesian Neural Networks
By: Alfredo Reichlin, Miguel Vasco, Danica Kragic
Potential Business Impact:
Lets computers learn with less guessing.
Bayesian Neural Networks provide a principled framework for uncertainty quantification by modeling the posterior distribution of network parameters. However, exact posterior inference is computationally intractable, and widely used approximations like the Laplace method struggle with scalability and posterior accuracy in modern deep networks. In this work, we revisit sampling techniques for posterior exploration, proposing a simple variation tailored to efficiently sample from the posterior in over-parameterized networks by leveraging the low-dimensional structure of loss minima. Building on this, we introduce a model that learns a deformation of the parameter space, enabling rapid posterior sampling without requiring iterative methods. Empirical results demonstrate that our approach achieves competitive posterior approximations with improved scalability compared to recent refinement techniques. These contributions provide a practical alternative for Bayesian inference in deep learning.
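The Laplace method the abstract contrasts against approximates the posterior with a Gaussian centered at a mode of the loss, with covariance given by the inverse Hessian there. A minimal sketch of that baseline, on a toy Bayesian linear-regression problem (where the Gaussian approximation happens to be exact); all variable names and the toy setup are illustrative, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: y = 2x + noise
X = rng.normal(size=(50, 1))
y = 2.0 * X[:, 0] + 0.1 * rng.normal(size=50)

# MAP estimate under a Gaussian prior (ridge regression)
alpha, sigma2 = 1.0, 0.01                   # prior precision, noise variance (assumed known)
H = X.T @ X / sigma2 + alpha * np.eye(1)    # Hessian of the negative log-posterior at the mode
w_map = np.linalg.solve(H, X.T @ y / sigma2)

# Laplace approximation: posterior ~ N(w_map, H^{-1})
cov = np.linalg.inv(H)
samples = rng.multivariate_normal(w_map, cov, size=1000)

print("MAP weight:", w_map[0], "posterior std:", samples.std())
```

In a deep network the Hessian is huge and the loss surface non-Gaussian, which is exactly the scalability and accuracy gap the abstract describes; the paper's approach instead exploits the low-dimensional geometry of loss minima to draw posterior samples without forming this matrix.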
Similar Papers
Fiber Bundle Networks: A Geometric Machine Learning Paradigm
Machine Learning (CS)
Makes AI understand things by seeing patterns.
VIKING: Deep variational inference with stochastic projections
Machine Learning (Stat)
Makes smart computer programs more accurate and reliable.
Efficient Approximate Posterior Sampling with Annealed Langevin Monte Carlo
Machine Learning (CS)
Makes AI create realistic images from messy data.