Toward Ethical AI Through Bayesian Uncertainty in Neural Question Answering
By: Riccardo Di Sipio
Potential Business Impact:
Helps AI say "I don't know" when unsure.
We explore Bayesian reasoning as a means to quantify uncertainty in neural networks for question answering. Starting with a multilayer perceptron on the Iris dataset, we show how posterior inference conveys confidence in predictions. We then extend this to language models, applying Bayesian inference first to a frozen classification head and finally to LoRA-adapted transformers, evaluated on the CommonsenseQA benchmark. Rather than aiming for state-of-the-art accuracy, we compare Laplace approximations against maximum a posteriori (MAP) estimates to highlight uncertainty calibration and selective prediction, which allows a model to abstain when its confidence is low. An "I don't know" response not only improves interpretability but also illustrates how Bayesian methods can contribute to more responsible and ethical deployment of neural question-answering systems.
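To make the idea concrete, here is a minimal sketch (not the paper's implementation) of the Iris-scale setup described above: a small MLP is trained to a MAP solution, a diagonal last-layer Laplace approximation is fitted around it, and predictions abstain with "I don't know" when the Monte Carlo predictive confidence falls below a threshold. The architecture, prior precision, and the 0.8 threshold are illustrative assumptions, not values from the paper.

```python
# Minimal sketch: last-layer diagonal Laplace approximation on Iris + selective prediction.
# Hyperparameters (hidden size, prior precision, abstention threshold) are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split

torch.manual_seed(0)
X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
X_tr = torch.tensor(X_tr, dtype=torch.float32)
X_te = torch.tensor(X_te, dtype=torch.float32)
y_tr, y_te = torch.tensor(y_tr), torch.tensor(y_te)

# 1) MAP training of a small MLP (feature extractor + linear head).
feats = nn.Sequential(nn.Linear(4, 16), nn.Tanh())
head = nn.Linear(16, 3)
model = nn.Sequential(feats, head)
opt = torch.optim.Adam(model.parameters(), lr=1e-2, weight_decay=1e-3)  # weight decay ~ Gaussian prior
for _ in range(500):
    opt.zero_grad()
    F.cross_entropy(model(X_tr), y_tr).backward()
    opt.step()

# 2) Diagonal Laplace approximation over the head only:
#    posterior ~ N(theta_MAP, H^{-1}), H approximated by the diagonal GGN plus the prior precision.
prior_prec = 1.0
with torch.no_grad():
    phi = feats(X_tr)                    # (N, 16) features under the MAP network
    p = F.softmax(head(phi), dim=-1)     # (N, 3) MAP class probabilities
lam = p * (1 - p)                        # per-example softmax curvature, (N, 3)
H_W = lam.T @ (phi ** 2) + prior_prec    # (3, 16) diagonal Hessian for the weights
H_b = lam.sum(0) + prior_prec            # (3,)   diagonal Hessian for the biases
std_W, std_b = H_W.rsqrt(), H_b.rsqrt()  # posterior standard deviations

# 3) Monte Carlo predictive distribution; abstain when confidence is below the threshold.
def predict(x, n_samples=100, threshold=0.8):
    with torch.no_grad():
        phi = feats(x)
        probs = torch.zeros(x.shape[0], 3)
        for _ in range(n_samples):
            W = head.weight + std_W * torch.randn_like(std_W)  # sample a head from the posterior
            b = head.bias + std_b * torch.randn_like(std_b)
            probs += F.softmax(phi @ W.T + b, dim=-1)
        probs /= n_samples
    conf, pred = probs.max(dim=-1)
    pred[conf < threshold] = -1          # -1 encodes "I don't know"
    return pred, conf

pred, conf = predict(X_te)
answered = pred != -1
acc = (pred[answered] == y_te[answered]).float().mean().item()
print(f"coverage={answered.float().mean().item():.2f}  accuracy_on_answered={acc:.2f}")
```

The same selective-prediction recipe carries over to the language-model experiments: only the head (or the LoRA parameters) receives a Laplace posterior, while the frozen backbone supplies the features.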
Similar Papers
Uncertainty Reasoning with Photonic Bayesian Machines
Machine Learning (CS)
Makes AI know when it's unsure, improving safety.
Uncertainty-Aware Data-Efficient AI: An Information-Theoretic Perspective
Information Theory
Teaches computers to learn more with less data.
Bayesian–AI Fusion for Epidemiological Decision Making: Calibrated Risk, Honest Uncertainty, and Hyperparameter Intelligence
Machine Learning (Stat)
Makes AI better at predicting health risks.