Learning density ratios in causal inference using Bregman-Riesz regression
By: Oliver J. Hines, Caleb H. Miles
Potential Business Impact:
Provides a unified, more reliable way to estimate density ratios, supporting causal effect estimation and learning when training and target data come from different distributions.
The ratio of two probability density functions is a fundamental quantity that appears in many areas of statistics and machine learning, including causal inference, reinforcement learning, covariate shift, outlier detection, independence testing, importance sampling, and diffusion modeling. Naively estimating the numerator and denominator densities separately using, e.g., kernel density estimators, can lead to unstable performance and suffers from the curse of dimensionality as the number of covariates increases. For this reason, several methods have been developed for estimating the density ratio directly based on (a) Bregman divergences or (b) recasting the density ratio as the odds in a probabilistic classification model that predicts whether an observation is sampled from the numerator or denominator distribution. Additionally, the density ratio can be viewed as the Riesz representer of a continuous linear map, making it amenable to estimation via (c) minimization of the so-called Riesz loss, which was developed to learn the Riesz representer in the Riesz regression procedure in causal inference. In this paper we show that all three of these methods can be unified in a common framework, which we call Bregman-Riesz regression. We further show how data augmentation techniques can be used to apply density ratio learning methods to causal problems, where the numerator distribution typically represents an unobserved intervention. We show through simulations how the choice of Bregman divergence and data augmentation strategy can affect the performance of the resulting density ratio learner. A Python package is provided for researchers to apply Bregman-Riesz regression in practice using gradient boosting, neural networks, and kernel methods.
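For a concrete sense of the classification route mentioned in the abstract, here is a minimal sketch (not the authors' package; the helper name and the logistic-regression model are illustrative assumptions): label numerator samples 1 and denominator samples 0, fit a probabilistic classifier, and convert its predicted probabilities to odds, rescaled by the sample-size ratio. For the squared-error Bregman divergence, the same ratio could instead be fit by minimizing the denominator-sample average of r(X)^2 minus twice the numerator-sample average of r(X), which in this setting is the least-squares form of the Riesz loss referred to above.

```python
# Minimal sketch (assumed names, not the paper's API) of classification-based
# density ratio estimation: r(x) = p_num(x) / p_den(x) recovered from the
# odds of a probabilistic classifier distinguishing numerator vs. denominator samples.
import numpy as np
from sklearn.linear_model import LogisticRegression

def classification_density_ratio(x_num, x_den, model=None):
    """Estimate r(x) = p_num(x) / p_den(x) via a probabilistic classifier."""
    model = model or LogisticRegression(max_iter=1000)
    x = np.vstack([x_num, x_den])
    y = np.concatenate([np.ones(len(x_num)), np.zeros(len(x_den))])
    model.fit(x, y)

    def ratio(x_new):
        p = model.predict_proba(x_new)[:, 1]           # P(y = 1 | x)
        odds = p / np.clip(1.0 - p, 1e-12, None)       # P(y = 1 | x) / P(y = 0 | x)
        # Correct for unequal sample sizes: r(x) = odds * n_den / n_num.
        return odds * len(x_den) / len(x_num)

    return ratio

# Toy check: numerator N(1, 1), denominator N(0, 1), so the true ratio is exp(x - 1/2).
rng = np.random.default_rng(0)
x_num = rng.normal(1.0, 1.0, size=(2000, 1))
x_den = rng.normal(0.0, 1.0, size=(2000, 1))
r_hat = classification_density_ratio(x_num, x_den)
print(r_hat(np.array([[0.0], [1.0]])))  # roughly exp(-0.5) and exp(0.5)
```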
Similar Papers
Riesz Regression As Direct Density Ratio Estimation
Machine Learning (Stat)
Shows that Riesz regression is a way of directly estimating density ratios.
Estimating Unbounded Density Ratios: Applications in Error Control under Covariate Shift
Machine Learning (Stat)
Studies estimating density ratios that can grow very large, to control errors when data distributions shift.
Distributional Evaluation of Generative Models via Relative Density Ratio
Methodology
Evaluates how closely computer-generated data matches real data using relative density ratios.