Input Adaptive Bayesian Model Averaging
By: Yuli Slavutsky, Sebastian Salazar, David M. Blei
Potential Business Impact:
Combines best guesses for smarter predictions.
This paper studies prediction with multiple candidate models, where the goal is to combine their outputs. This task is especially challenging in heterogeneous settings, where different models may be better suited to different inputs. We propose input adaptive Bayesian Model Averaging (IA-BMA), a Bayesian method that assigns model weights conditional on the input. IA-BMA employs an input adaptive prior and yields a posterior distribution that adapts to each prediction, which we estimate with amortized variational inference. We derive formal guarantees for its performance relative to any single predictor selected per input. We evaluate IA-BMA across regression and classification tasks, studying data from personalized cancer treatment, credit-card fraud detection, and UCI datasets. IA-BMA consistently delivers more accurate and better-calibrated predictions than both non-adaptive baselines and existing adaptive methods.
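The core idea of input-conditional model weighting can be illustrated with a minimal sketch. This is not the paper's IA-BMA implementation (which uses an input adaptive prior and amortized variational inference); it is a simplified, assumed setup where a hypothetical gating function maps each input to softmax weights over two toy candidate models, and predictions are combined with those per-input weights.

```python
import numpy as np

# Hypothetical illustration of input-adaptive model averaging.
# NOT the paper's IA-BMA method: no Bayesian prior or variational
# inference here, just per-input softmax weights over candidate models.

def model_a(x):
    # Toy candidate model, imagined to suit one input region.
    return 2.0 * x

def model_b(x):
    # Second toy candidate model, suited to a different region.
    return x ** 2

def gate_weights(x, theta):
    # Linear gating scores per model, normalized with a softmax,
    # so each input x gets its own weight vector over the models.
    scores = np.stack([theta[0] * x, theta[1] * x], axis=-1)
    exp = np.exp(scores - scores.max(axis=-1, keepdims=True))
    return exp / exp.sum(axis=-1, keepdims=True)

def adaptive_average(x, theta):
    # Combine the candidate predictions with per-input weights.
    w = gate_weights(x, theta)                       # shape (..., 2)
    preds = np.stack([model_a(x), model_b(x)], axis=-1)
    return (w * preds).sum(axis=-1)

x = np.array([-1.0, 0.0, 1.0])
theta = np.array([-3.0, 3.0])   # pushes weight toward model_a for x < 0
print(adaptive_average(x, theta))
```

In contrast to non-adaptive averaging, which would use one fixed weight vector for every input, the weights here change with `x`, which is the property the abstract highlights for heterogeneous settings.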
Similar Papers
Flexible extreme thresholds through generalised Bayesian model averaging
Methodology
Finds best ways to predict big insurance payouts.
Bayesian Model Averaging in Causal Instrumental Variable Models
Methodology
Finds true causes even with hidden problems.
EMA Without the Lag: Bias-Corrected Iterate Averaging Schemes
Machine Learning (CS)
Makes AI learn faster and better.