Riesz Representer Fitting under Bregman Divergence: A Unified Framework for Debiased Machine Learning
By: Masahiro Kato
Potential Business Impact:
Unifies methods for making machine-learning estimates of causal effects more accurate.
Estimating the Riesz representer is a central problem in debiased machine learning for causal and structural parameter estimation. Various methods for Riesz representer estimation have been proposed, including Riesz regression and covariate balancing. This study unifies these methods within a single framework. Our framework fits a Riesz representer model to the true Riesz representer under a Bregman divergence, which includes the squared loss and the Kullback--Leibler (KL) divergence as special cases. We show that the squared loss corresponds to Riesz regression and the KL divergence corresponds to tailored loss minimization, where the dual solutions correspond to stable balancing weights and entropy balancing weights, respectively, under specific model specifications. We refer to our method as generalized Riesz regression and to the associated duality as automatic covariate balancing. Our framework also generalizes density ratio fitting under a Bregman divergence to Riesz representer estimation, thereby covering various applications beyond density ratio estimation. We also provide a convergence analysis for both the case where the model class is a reproducing kernel Hilbert space (RKHS) and the case where it is a neural network.
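To make the squared-loss special case concrete, here is a minimal NumPy sketch (not the paper's code) of Riesz regression for the average treatment effect functional m(W; f) = f(1, X) - f(0, X), whose true Riesz representer is alpha(D, X) = D/e(X) - (1 - D)/(1 - e(X)) with propensity score e(X). For a linear model alpha = b(W)'theta, minimizing the squared-loss objective E[alpha(W)^2 - 2 m(W; alpha)] has a closed-form solution. The basis and data-generating process below are illustrative assumptions.

```python
# Minimal sketch of squared-loss Riesz regression (the Bregman special case
# the abstract identifies with Riesz regression). Illustrative, not the
# paper's implementation.
import numpy as np

rng = np.random.default_rng(0)
n = 5000
X = rng.normal(size=(n, 1))
e = 1.0 / (1.0 + np.exp(-X[:, 0]))      # true propensity score (assumed DGP)
D = rng.binomial(1, e)

def basis(d, x):
    """Simple polynomial basis in (d, x) used to model alpha(W)."""
    return np.column_stack([np.ones_like(x[:, 0]), d, x[:, 0],
                            d * x[:, 0], x[:, 0] ** 2, d * x[:, 0] ** 2])

# Squared-loss Riesz regression: minimize E[alpha(W)^2 - 2 m(W; alpha)].
# For a linear model alpha = b(W) @ theta, the first-order condition gives
# theta = E[b b']^{-1} E[m(W; b)], with m(W; b) = b(1, X) - b(0, X).
b = basis(D.astype(float), X)
m_b = basis(np.ones(n), X) - basis(np.zeros(n), X)
theta = np.linalg.solve(b.T @ b / n, m_b.mean(axis=0))
alpha_hat = b @ theta

# Diagnostic: compare the fit with the known true Riesz representer.
alpha_true = D / e - (1 - D) / (1 - e)
print("RMSE vs true Riesz representer:",
      np.sqrt(np.mean((alpha_hat - alpha_true) ** 2)))
```

Swapping the squared loss for the KL divergence in the same fitting problem yields the tailored-loss objective whose dual, per the abstract, recovers entropy balancing weights; the linear-algebra shortcut above then no longer applies and the objective must be minimized numerically.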
Similar Papers
Direct Debiased Machine Learning via Bregman Divergence Minimization
Econometrics
Makes computer predictions more accurate and fair.
ScoreMatchingRiesz: Auto-DML with Infinitesimal Classification
Econometrics
Helps computers learn from data without overfitting.