Hi-fi functional priors by learning activations

Published: August 12, 2025 | arXiv ID: 2508.08880v1

By: Marcin Sendera, Amin Sorkhei, Tomasz Kuśmierczyk

Potential Business Impact:

Helps neural networks make more trustworthy predictions by letting them learn their own activation functions, which improves uncertainty estimates for risk-aware decisions.

Function-space priors in Bayesian Neural Networks (BNNs) provide a more intuitive approach to embedding beliefs directly into the model's outputs, thereby enhancing regularization, uncertainty quantification, and risk-aware decision-making. However, imposing function-space priors on BNNs is challenging. We address this task through optimization techniques that explore how trainable activations can accommodate higher-complexity priors and match intricate target function distributions. We investigate flexible activation models, including Padé functions and piecewise linear functions, and discuss the learning challenges related to identifiability, loss construction, and symmetries. Our empirical findings indicate that even BNNs with a single wide hidden layer, when equipped with flexible trainable activations, can effectively achieve desired function-space priors.
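
The abstract does not spell out the implementation, so the sketch below is only a rough illustration of the idea, not the authors' code: a trainable Padé-style (rational) activation placed inside a single wide hidden layer whose weights are drawn from a standard Gaussian prior, with the activation's coefficients optimized so that the induced function-space distribution matches a target. The parameterization, network widths, and the simple moment-matching loss are all illustrative assumptions standing in for whatever objective the paper actually uses.

```python
import torch
import torch.nn as nn

class PadeActivation(nn.Module):
    """Trainable rational activation p(x) / (1 + |q(x)|).

    Hypothetical parameterization for illustration; the paper's exact
    form of the Padé activation may differ.
    """
    def __init__(self, num_degree: int = 3, den_degree: int = 2):
        super().__init__()
        # Numerator and denominator polynomial coefficients are learned.
        self.a = nn.Parameter(0.1 * torch.randn(num_degree + 1))
        self.b = nn.Parameter(0.1 * torch.randn(den_degree))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        num = sum(c * x ** i for i, c in enumerate(self.a))
        den = 1.0 + torch.abs(sum(c * x ** (i + 1) for i, c in enumerate(self.b)))
        return num / den

def sample_functions(act, x, width=512, n_samples=64):
    # Single wide hidden layer with weights sampled from a Gaussian prior;
    # only the activation is trained, the weights are re-sampled each call.
    w1 = torch.randn(n_samples, 1, width)
    b1 = torch.randn(n_samples, 1, width)
    w2 = torch.randn(n_samples, width, 1) / width ** 0.5
    h = act(x @ w1 + b1)            # (n_samples, n_points, width)
    return (h @ w2).squeeze(-1)      # (n_samples, n_points)

act = PadeActivation()
opt = torch.optim.Adam(act.parameters(), lr=1e-2)
x = torch.linspace(-3, 3, 50).reshape(1, -1, 1)   # evaluation grid

for step in range(200):
    f = sample_functions(act, x)
    # Placeholder moment-matching loss: push the induced prior toward zero
    # mean and unit variance at every input location (a stand-in for a
    # proper distribution-matching objective against, e.g., a GP prior).
    loss = f.mean(0).pow(2).mean() + (f.var(0) - 1.0).pow(2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```

In this sketch, gradients flow only into the activation's coefficients, which mirrors the paper's framing of matching a target function distribution by learning the activation rather than the weight prior.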

Page Count
11 pages

Category
Computer Science:
Machine Learning (CS)