From Overfitting to Reliability: Introducing the Hierarchical Approximate Bayesian Neural Network
By: Hayk Amirkhanian, Marco F. Huber
In recent years, neural networks have revolutionized various domains, yet challenges such as hyperparameter tuning and overfitting remain significant hurdles. Bayesian neural networks offer a framework to address these challenges by incorporating uncertainty directly into the model, yielding more reliable predictions, particularly for out-of-distribution data. This paper presents the Hierarchical Approximate Bayesian Neural Network (HABNN), a novel approach that uses a Gaussian-inverse-Wishart distribution as a hyperprior on the network's weights to increase both the robustness and the performance of the model. We provide analytical representations for the predictive distribution and the weight posterior, both of which reduce to computing the parameters of Student's t-distributions in closed form, with complexity linear in the number of weights. Our method demonstrates robust performance, effectively addressing overfitting and providing reliable uncertainty estimates, particularly for out-of-distribution tasks. Experimental results indicate that HABNN not only matches but often outperforms state-of-the-art models, suggesting a promising direction for future applications in safety-critical environments.
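To give intuition for why a Gaussian-inverse-Wishart hyperprior yields Student's t-distributions in closed form, the following sketch shows the standard conjugate Normal-inverse-Wishart update and its Student-t posterior predictive. This is the textbook conjugacy result the approach builds on, applied here to generic observations; it is not the paper's HABNN algorithm itself, and the function name and interface are illustrative assumptions.

```python
import numpy as np

def niw_posterior_predictive(X, mu0, kappa0, nu0, Psi0):
    """Conjugate Normal-inverse-Wishart (NIW) update for data X (n x d),
    returning the parameters of the multivariate Student-t posterior
    predictive: location mu_n, scale matrix, and degrees of freedom.
    Standard conjugate result, not the paper's HABNN method itself."""
    n, d = X.shape
    xbar = X.mean(axis=0)
    S = (X - xbar).T @ (X - xbar)                   # scatter matrix
    # NIW posterior parameters (closed form)
    kappa_n = kappa0 + n
    nu_n = nu0 + n
    mu_n = (kappa0 * mu0 + n * xbar) / kappa_n
    diff = (xbar - mu0).reshape(-1, 1)
    Psi_n = Psi0 + S + (kappa0 * n / kappa_n) * (diff @ diff.T)
    # Posterior predictive is multivariate Student-t
    dof = nu_n - d + 1
    scale = Psi_n * (kappa_n + 1) / (kappa_n * dof)
    return mu_n, scale, dof
```

Note that every quantity is obtained with a constant number of elementwise and matrix operations per update, which mirrors the paper's claim that the posterior and predictive parameters can be computed analytically rather than by sampling.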