Uncertainty Quantification for Prior-Data Fitted Networks using Martingale Posteriors
By: Thomas Nagler, David Rügamer
Potential Business Impact:
Quantifies how confident a model is about its predictions.
Prior-data fitted networks (PFNs) have emerged as promising foundation models for prediction from tabular data sets, achieving state-of-the-art performance on small to moderate data sizes without tuning. While PFNs are motivated by Bayesian ideas, they do not provide any uncertainty quantification for predictive means, quantiles, or similar quantities. We propose a principled and efficient sampling procedure to construct Bayesian posteriors for such estimates based on Martingale posteriors, and prove its convergence. Several simulated and real-world data examples showcase the uncertainty quantification of our method in inference applications.
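The core idea of a martingale posterior is predictive resampling: repeatedly extend the observed data by drawing pseudo-observations from the model's one-step-ahead predictive, then recompute the quantity of interest on each extended sequence; the spread of those recomputed values acts as a posterior. The sketch below illustrates this with the simplest possible predictive, a Pólya-urn / Bayesian-bootstrap scheme for the mean of a sample. It is a toy stand-in, not the paper's PFN-based predictive; the function name, step counts, and the choice of the empirical distribution as predictive are all illustrative assumptions.

```python
import numpy as np

def martingale_posterior_mean(x, n_forward=500, n_draws=400, seed=0):
    """Approximate a martingale posterior for the mean by predictive
    resampling. Illustrative only: the one-step-ahead predictive here is
    the empirical distribution of the current sequence (a Polya urn),
    not the PFN predictive used in the paper."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x, dtype=float)
    draws = np.empty(n_draws)
    for b in range(n_draws):
        seq = list(x)
        for _ in range(n_forward):
            # Draw the next pseudo-observation from the current
            # empirical predictive, then fold it back into the sequence.
            seq.append(seq[rng.integers(len(seq))])
        # Recompute the functional of interest on the extended sequence.
        draws[b] = np.mean(seq)
    return draws

rng = np.random.default_rng(1)
x = rng.normal(loc=1.0, scale=1.0, size=50)
post = martingale_posterior_mean(x)
lo, hi = np.quantile(post, [0.025, 0.975])  # 95% credible interval
```

Each forward pass is a martingale in the running mean, so the draws concentrate around the observed sample mean while their spread reflects the remaining uncertainty; the paper replaces the urn predictive with a PFN and proves convergence of the resulting sampler.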
Similar Papers
Understanding the Trade-offs in Accuracy and Uncertainty Quantification: Architecture and Inference Choices in Bayesian Neural Networks
Machine Learning (CS)
Examines how architecture and inference choices trade off accuracy against uncertainty estimates in Bayesian neural networks.
Martingale Posteriors from Score Functions
Methodology
Builds martingale posteriors directly from score functions for uncertainty-aware prediction.
A Conformal Prediction Framework for Uncertainty Quantification in Physics-Informed Neural Networks
Machine Learning (CS)
Gives physics-informed neural networks calibrated uncertainty via conformal prediction.