Predictive posteriors under hidden confounding
By: Carlos García Meixide, David Ríos Insua
Potential Business Impact:
Accounts for hidden causes to make better predictions in new domains.
Predicting outcomes in external domains is challenging due to hidden confounders that influence both predictors and outcomes, complicating generalization under distribution shifts. Traditional methods often rely on stringent assumptions or overly conservative regularization, compromising estimation and predictive accuracy. Generative Invariance (GI) is a novel framework that facilitates predictions in unseen domains without requiring hyperparameter tuning or knowledge of specific distribution shifts. However, the available frequentist version of GI does not always enable identification and lacks uncertainty quantification for its predictions. This paper develops a Bayesian formulation that extends GI with well-calibrated external predictions and facilitates causal discovery. We present theoretical guarantees showing that prior distributions assign asymptotic meaning to the number of distinct datasets that could be observed. Simulations and a real-data application highlight the remarkable empirical coverage behavior of our approach, which remains nearly unchanged when moving from low- to moderate-dimensional settings.
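The core difficulty the abstract describes can be illustrated with a minimal simulation. The sketch below is not the paper's method: it only shows how a hidden confounder H, influencing both the predictor X and the outcome Y, biases a naively fitted regression and makes its predictions fail once the distribution of H shifts in a target domain. All coefficients and variable names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(n, h_mean):
    """Generate data where a hidden confounder H drives both X and Y."""
    H = rng.normal(h_mean, 1.0, n)          # hidden confounder
    X = H + rng.normal(0.0, 1.0, n)         # predictor, influenced by H
    Y = 2.0 * X + 3.0 * H + rng.normal(0.0, 1.0, n)  # true causal slope is 2.0
    return X, Y

# Fit ordinary least squares in a source domain where H is centered at 0.
Xs, Ys = simulate(5000, h_mean=0.0)
slope, intercept = np.polyfit(Xs, Ys, 1)
# The fitted slope absorbs the confounding path through H,
# so it lands near 3.5 rather than the causal 2.0.

# In a target domain the hidden confounder shifts; naive predictions are biased.
Xt, Yt = simulate(5000, h_mean=2.0)
bias = np.mean(Yt - (slope * Xt + intercept))
```

Because H is unobserved, no amount of source-domain data fixes the slope or the target-domain bias; this is the failure mode that motivates frameworks such as GI and its Bayesian extension described above.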
Similar Papers
Generative Classifier for Domain Generalization
CV and Pattern Recognition
Teaches computers to see better in new places.
Global Variational Inference Enhanced Robust Domain Adaptation
Machine Learning (CS)
Helps computers learn from different data better.
AI-Powered Bayesian Inference
Methodology
Makes AI answers more trustworthy for decisions.