Robust Optimization in Causal Models and G-Causal Normalizing Flows
By: Gabriele Visentin, Patrick Cheridito
Potential Business Impact:
Helps computers learn better from cause-and-effect relationships.
In this paper, we show that interventionally robust optimization problems in causal models are continuous under the $G$-causal Wasserstein distance, but may be discontinuous under the standard Wasserstein distance. This highlights the importance of using generative models that respect the causal structure when augmenting data for such tasks. To this end, we propose a new normalizing flow architecture that satisfies a universal approximation property for structural causal models and can be efficiently trained to minimize the $G$-causal Wasserstein distance. Empirically, we demonstrate that our model outperforms standard (non-causal) generative models in data augmentation for causal regression and for mean-variance portfolio optimization in causal factor models.
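To make the idea of a generative model "respecting the causal structure" concrete, below is a minimal, hypothetical sketch of a graph-masked affine autoregressive flow: each coordinate is generated from its parents in a DAG $G$ plus independent exogenous noise, mirroring a structural causal model. This is not the architecture or training objective from the paper; the class name, the linear conditioners, and the example graph are all illustrative assumptions.

```python
# Hypothetical sketch of a "G-causal" affine autoregressive flow (not the
# paper's architecture). Each variable X_i is transformed using only its
# parents in the DAG G, so sampling follows an SCM-like recursion.

import numpy as np

class GCausalAffineFlow:
    """Affine autoregressive flow whose i-th coordinate is conditioned only
    on the parents of node i in a DAG G (nodes assumed topologically ordered)."""

    def __init__(self, adjacency, rng=None):
        self.A = np.asarray(adjacency, dtype=float)   # A[i, j] = 1 if j -> i
        self.d = self.A.shape[0]
        rng = rng or np.random.default_rng(0)
        # Linear conditioners for log-scale and shift, masked by the graph.
        self.Ws = rng.normal(scale=0.1, size=(self.d, self.d)) * self.A
        self.Wt = rng.normal(scale=0.1, size=(self.d, self.d)) * self.A
        self.bs = np.zeros(self.d)
        self.bt = np.zeros(self.d)

    def forward(self, x):
        """Map data x to latent noise z; coordinate i sees only its parents."""
        s = x @ self.Ws.T + self.bs          # log-scales from parents only
        t = x @ self.Wt.T + self.bt          # shifts from parents only
        z = (x - t) * np.exp(-s)
        log_det = -s.sum(axis=-1)            # Jacobian is triangular under a
        return z, log_det                    # topological ordering of G

    def sample(self, n, rng=None):
        """Generate samples by pushing exogenous noise through the recursion."""
        rng = rng or np.random.default_rng(1)
        z = rng.standard_normal((n, self.d))
        x = np.zeros_like(z)
        for i in range(self.d):              # topological order of G
            s_i = x @ self.Ws[i] + self.bs[i]
            t_i = x @ self.Wt[i] + self.bt[i]
            x[:, i] = z[:, i] * np.exp(s_i) + t_i
        return x


# Usage on a 3-node chain X0 -> X1 -> X2.
if __name__ == "__main__":
    A = np.array([[0, 0, 0],
                  [1, 0, 0],
                  [0, 1, 0]])
    flow = GCausalAffineFlow(A)
    samples = flow.sample(5)
    z, log_det = flow.forward(samples)
    print(samples.shape, z.shape, log_det.shape)
```

Because the conditioners are masked by the adjacency matrix, interventions on a node propagate only to its descendants, which is the property a non-causal flow (conditioning each coordinate on all preceding ones) does not enforce.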
Similar Papers
Worst-case generation via minimax optimization in Wasserstein space
Machine Learning (Stat)
Finds worst-case problems for computers to fix.
Group Distributionally Robust Machine Learning under Group Level Distributional Uncertainty
Machine Learning (CS)
Makes AI fair for everyone, even small groups.
Unregularized limit of stochastic gradient method for Wasserstein distributionally robust optimization
Optimization and Control
Makes machine learning work better with uncertain information.