EviNAM: Intelligibility and Uncertainty via Evidential Neural Additive Models

Published: January 13, 2026 | arXiv ID: 2601.08556v1

By: Sören Schleibaum, Anton Frederik Thielmann, Julian Teusch, and more

Intelligibility and accurate uncertainty estimation are crucial for reliable decision-making. In this paper, we propose EviNAM, an extension of evidential learning that integrates the interpretability of Neural Additive Models (NAMs) with principled uncertainty estimation. Unlike standard Bayesian neural networks and previous evidential methods, EviNAM provides, in a single forward pass, both aleatoric and epistemic uncertainty estimates and explicit per-feature contributions. Experiments on synthetic and real data demonstrate that EviNAM matches state-of-the-art predictive performance. While we focus on regression, our method extends naturally to classification and generalized additive models, offering a path toward more intelligible and trustworthy predictions.
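To make the single-pass uncertainty claim concrete, here is a minimal sketch of the kind of evidential regression output such a model could produce. It assumes the standard Normal-Inverse-Gamma (NIG) parameterization from deep evidential regression (parameters gamma, nu, alpha, beta) and a NAM-style additive combination over per-feature shape functions; the actual parameterization and aggregation used by EviNAM may differ, and all names below are illustrative.

```python
def evidential_moments(gamma, nu, alpha, beta):
    """Closed-form moments of a Normal-Inverse-Gamma evidential output.

    Standard deep-evidential-regression formulas (assumed here, not
    taken from the EviNAM paper):
      prediction          E[mu]      = gamma
      aleatoric variance  E[sigma^2] = beta / (alpha - 1)
      epistemic variance  Var[mu]    = beta / (nu * (alpha - 1))
    Requires alpha > 1 and nu > 0 for the moments to exist.
    """
    assert alpha > 1 and nu > 0
    aleatoric = beta / (alpha - 1)
    epistemic = beta / (nu * (alpha - 1))
    return gamma, aleatoric, epistemic


# Hypothetical per-feature NIG outputs, one tuple per shape function
# f_j(x_j).  In a NAM the point prediction is additive over features,
# so each gamma_j is an explicit, interpretable feature contribution.
feature_outputs = [
    (0.8, 2.0, 3.0, 1.0),   # (gamma, nu, alpha, beta) for feature 1
    (-0.3, 4.0, 2.5, 0.5),  # (gamma, nu, alpha, beta) for feature 2
]
prediction = sum(g for (g, _, _, _) in feature_outputs)
per_feature = [evidential_moments(*p) for p in feature_outputs]
```

This illustrates why a single forward pass suffices: the uncertainties are closed-form functions of the network's NIG outputs, with no sampling or ensembling required.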

Category: Computer Science, Machine Learning (CS)