Hi-fi functional priors by learning activations
By: Marcin Sendera, Amin Sorkhei, Tomasz Kuśmierczyk
Potential Business Impact:
Makes smart computers learn better with new brain parts.
Function-space priors in Bayesian Neural Networks (BNNs) provide a more intuitive approach to embedding beliefs directly into the model's output, thereby enhancing regularization, uncertainty quantification, and risk-aware decision-making. However, imposing function-space priors on BNNs is challenging. We address this task through optimization techniques that explore how trainable activations can accommodate higher-complexity priors and match intricate target function distributions. We investigate flexible activation models, including Padé functions and piecewise linear functions, and discuss the learning challenges related to identifiability, loss construction, and symmetries. Our empirical findings indicate that even BNNs with a single wide hidden layer, when equipped with flexible trainable activations, can effectively achieve desired function-space priors.
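To make the idea concrete, below is a minimal, hypothetical sketch (not the authors' code) of the approach the abstract describes: a single wide-hidden-layer BNN whose piecewise-linear activation has learnable knot values, trained so that functions sampled from the BNN prior match a target Gaussian-process prior. The knot grid, the RBF target kernel, and the per-location 1D Wasserstein loss are illustrative assumptions; the paper additionally considers Padé activations and discusses richer loss constructions.

```python
# Sketch: learning a piecewise-linear activation so a 1-hidden-layer BNN prior
# matches a GP prior. All hyperparameters here are illustrative placeholders.
import math
import torch

torch.manual_seed(0)


class PiecewiseLinearActivation(torch.nn.Module):
    """Learnable values at fixed knots; linear interpolation between knots."""

    def __init__(self, num_knots=16, span=3.0):
        super().__init__()
        self.register_buffer("knots", torch.linspace(-span, span, num_knots))
        # Start near ReLU so optimization begins from a familiar shape.
        self.values = torch.nn.Parameter(torch.relu(self.knots).clone())

    def forward(self, z):
        z = z.clamp(self.knots[0].item(), self.knots[-1].item())  # constant extrapolation
        idx = torch.searchsorted(self.knots, z, right=True).clamp(1, len(self.knots) - 1)
        x0, x1 = self.knots[idx - 1], self.knots[idx]
        y0, y1 = self.values[idx - 1], self.values[idx]
        w = (z - x0) / (x1 - x0)
        return y0 + w * (y1 - y0)


def bnn_prior_samples(x, activation, width=512, n_samples=64):
    """Sample functions f(x) from a 1-hidden-layer BNN prior with Gaussian weights."""
    n = x.shape[0]
    w1 = torch.randn(n_samples, 1, width)
    b1 = torch.randn(n_samples, 1, width)
    w2 = torch.randn(n_samples, width, 1) / math.sqrt(width)
    h = activation(x.view(1, n, 1) * w1 + b1)       # (n_samples, n, width)
    return (h @ w2).squeeze(-1)                      # (n_samples, n)


def gp_prior_samples(x, n_samples=64, lengthscale=1.0):
    """Sample functions from a zero-mean GP prior with an RBF kernel (the target)."""
    d2 = (x.view(-1, 1) - x.view(1, -1)) ** 2
    K = torch.exp(-0.5 * d2 / lengthscale ** 2) + 1e-5 * torch.eye(x.shape[0])
    L = torch.linalg.cholesky(K)
    return (L @ torch.randn(x.shape[0], n_samples)).T


x = torch.linspace(-4, 4, 50)
act = PiecewiseLinearActivation()
opt = torch.optim.Adam(act.parameters(), lr=1e-2)

for step in range(1000):
    f_bnn = bnn_prior_samples(x, act)
    f_gp = gp_prior_samples(x)
    # Marginal 1D Wasserstein distance at each input location (sorted-sample matching);
    # a simple stand-in for the more elaborate losses discussed in the paper.
    loss = (f_bnn.sort(dim=0).values - f_gp.sort(dim=0).values).abs().mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```

After training, only the activation's knot values have changed; the Gaussian weight prior is untouched, which is what lets a single wide layer absorb the complexity of the target function-space prior in this sketch.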
Similar Papers
Bayesian neural networks with interpretable priors from Mercer kernels
Machine Learning (Stat)
Makes smart computers understand when they're unsure.
Developing Training Procedures for Piecewise-linear Spline Activation Functions in Neural Networks
Machine Learning (CS)
Makes computer brains learn better and faster.
Accelerated Execution of Bayesian Neural Networks using a Single Probabilistic Forward Pass and Code Generation
Machine Learning (CS)
Makes AI safer by knowing when it's wrong.