Toward Ethical AI Through Bayesian Uncertainty in Neural Question Answering

Published: December 19, 2025 | arXiv ID: 2512.17677v1

By: Riccardo Di Sipio

Potential Business Impact:

Helps AI say "I don't know" when unsure.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

We explore Bayesian reasoning as a means to quantify uncertainty in neural networks for question answering. Starting with a multilayer perceptron on the Iris dataset, we show how posterior inference conveys confidence in predictions. We then extend this to language models, applying Bayesian inference first to a frozen head and finally to LoRA-adapted transformers, evaluated on the CommonsenseQA benchmark. Rather than aiming for state-of-the-art accuracy, we compare Laplace approximations against maximum a posteriori (MAP) estimates to highlight uncertainty calibration and selective prediction. This allows models to abstain when confidence is low. An "I don't know" response not only improves interpretability but also illustrates how Bayesian methods can contribute to more responsible and ethical deployment of neural question-answering systems.
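The selective-prediction idea described in the abstract can be sketched as a simple confidence threshold: answer when the predictive distribution is peaked, abstain with "I don't know" otherwise. The sketch below is illustrative only and is not the paper's implementation; in the Bayesian setting the probabilities would come from averaging the predictive distribution over posterior samples (e.g. from a Laplace approximation), but the function, labels, probabilities, and threshold here are all hypothetical.

```python
import numpy as np

def selective_predict(probs, labels, threshold=0.7):
    """Return the highest-probability label, or None (abstain,
    i.e. "I don't know") if the top probability is below `threshold`.

    `probs` is a predictive distribution over answer choices; in a
    Bayesian model it would typically be the posterior-averaged
    predictive distribution rather than a single MAP softmax.
    """
    probs = np.asarray(probs, dtype=float)
    top = int(np.argmax(probs))
    if probs[top] < threshold:
        return None  # confidence too low: abstain
    return labels[top]

# Hypothetical CommonsenseQA-style five-way answer choices.
labels = ["A", "B", "C", "D", "E"]

# A peaked distribution: the model answers.
confident = selective_predict([0.05, 0.85, 0.04, 0.03, 0.03], labels)
# A flat distribution: the model abstains.
uncertain = selective_predict([0.30, 0.25, 0.20, 0.15, 0.10], labels)
print(confident)  # "B"
print(uncertain)  # None -> "I don't know"
```

A well-calibrated posterior tends to spread probability mass across answers exactly when the model is likely to be wrong, which is what makes this kind of thresholded abstention useful in practice.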

Page Count
14 pages

Category
Computer Science:
Computation and Language