EviNAM: Intelligibility and Uncertainty via Evidential Neural Additive Models
By: Sören Schleibaum, Anton Frederik Thielmann, Julian Teusch, et al.
Intelligibility and accurate uncertainty estimation are crucial for reliable decision-making. In this paper, we propose EviNAM, an extension of evidential learning that integrates the interpretability of Neural Additive Models (NAMs) with principled uncertainty estimation. Unlike standard Bayesian neural networks and previous evidential methods, EviNAM yields, in a single forward pass, estimates of both aleatoric and epistemic uncertainty together with explicit per-feature contributions. Experiments on synthetic and real-world data demonstrate that EviNAM matches state-of-the-art predictive performance. While we focus on regression, our method extends naturally to classification and generalized additive models, offering a path toward more intelligible and trustworthy predictions.
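To make the idea concrete, here is a minimal sketch of how a NAM can be combined with evidential regression, assuming the Normal-Inverse-Gamma (NIG) parameterization of Deep Evidential Regression (Amini et al., 2020). The class names (`FeatureNet`, `EviNAMSketch`), layer sizes, and the choice to sum raw per-feature outputs before applying positivity constraints are illustrative assumptions, not the authors' released implementation.

```python
# Sketch only: a NAM whose per-feature subnetworks each emit raw evidential
# parameters, which are summed additively and then constrained to form a
# valid NIG distribution. This is an assumed design, not the paper's code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class FeatureNet(nn.Module):
    """One small MLP per input feature; outputs 4 raw evidential parameters."""
    def __init__(self, hidden: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(1, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 4),  # raw (gamma, nu, alpha, beta) contribution
        )

    def forward(self, x):  # x: (batch, 1)
        return self.net(x)

class EviNAMSketch(nn.Module):
    """Additive model: per-feature evidential contributions are summed,
    then constrained so that nu, beta > 0 and alpha > 1."""
    def __init__(self, n_features: int, hidden: int = 32):
        super().__init__()
        self.feature_nets = nn.ModuleList(
            [FeatureNet(hidden) for _ in range(n_features)]
        )

    def forward(self, x):  # x: (batch, n_features)
        # Each feature contributes additively in raw parameter space,
        # keeping per-feature contributions explicit and inspectable.
        contribs = torch.stack(
            [net(x[:, i:i + 1]) for i, net in enumerate(self.feature_nets)],
            dim=0,
        )  # (n_features, batch, 4)
        raw = contribs.sum(dim=0)  # (batch, 4)
        gamma = raw[:, 0]                    # predicted mean
        nu    = F.softplus(raw[:, 1])        # virtual observations, > 0
        alpha = F.softplus(raw[:, 2]) + 1.0  # shape, > 1
        beta  = F.softplus(raw[:, 3])        # scale, > 0
        return gamma, nu, alpha, beta, contribs

# Single-pass uncertainty decomposition under the NIG parameterization:
#   aleatoric = E[sigma^2] = beta / (alpha - 1)
#   epistemic = Var[mu]    = beta / (nu * (alpha - 1))
model = EviNAMSketch(n_features=5)
x = torch.randn(8, 5)
gamma, nu, alpha, beta, contribs = model(x)
aleatoric = beta / (alpha - 1)
epistemic = beta / (nu * (alpha - 1))
```

Summing the raw per-feature outputs before applying the softplus constraints keeps the additive structure exact in parameter space, so each feature's contribution to the evidential parameters can be read off directly; whether EviNAM combines contributions this way or constrains them per feature first is not specified in the abstract.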