Distributionally Robust Optimization via Diffusion Ambiguity Modeling
By: Jiaqi Wen, Jianyi Yang
Potential Business Impact:
Makes machine-learning models more reliable on new, unseen data.
This paper studies Distributionally Robust Optimization (DRO), a fundamental framework for enhancing the robustness and generalization of statistical learning and optimization. An effective ambiguity set for DRO must include distributions that remain consistent with the nominal distribution while being diverse enough to account for a variety of potential scenarios. Moreover, it should lead to tractable DRO solutions. To this end, we propose a diffusion-based ambiguity set design that captures various adversarial distributions beyond the nominal support space while maintaining consistency with the nominal distribution. Building on this ambiguity modeling, we propose Diffusion-based DRO (D-DRO), a tractable DRO algorithm that solves the inner maximization over the parameterized diffusion model space. We formally establish the stationary convergence of D-DRO and empirically demonstrate its superior Out-of-Distribution (OOD) generalization performance in an ML prediction task.
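The abstract describes a min-max structure: an outer minimization over model parameters and an inner maximization over a parameterized family of adversarial distributions. The sketch below illustrates that alternating scheme on a toy 1-D regression problem. Note the assumptions: the paper's inner maximization runs over a diffusion model's parameter space, whereas here the adversary is reduced to a single input-shift parameter `delta` with a quadratic penalty (`lam`) standing in for consistency with the nominal distribution; all variable names and step sizes are illustrative, not from the paper.

```python
import numpy as np

# Toy data: a 1-D linear model w*x fit to data with an intercept the
# model cannot represent, so the worst-case perturbation is non-trivial.
x = np.array([0.0, 1.0, 2.0, 3.0])
y = 2.0 * x + 1.0

lam = 10.0        # penalty keeping the adversary close to the nominal data
w, delta = 0.0, 0.0   # model parameter / adversarial shift parameter

for _ in range(300):          # outer loop: minimize robust loss over w
    for _ in range(10):       # inner loop: maximize loss over delta
        r = w * (x + delta) - y
        grad_delta = np.mean(2.0 * r * w) - 2.0 * lam * delta
        delta += 0.02 * grad_delta    # gradient *ascent* for the adversary
    r = w * (x + delta) - y
    grad_w = np.mean(2.0 * r * (x + delta))
    w -= 0.05 * grad_w                # gradient descent for the model
```

The alternating updates mirror the tractable solution strategy the abstract claims: because the adversary is parameterized (here by `delta`, in the paper by diffusion-model weights), the inner maximization becomes a finite-dimensional optimization amenable to gradient methods.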
Similar Papers
Decision Making under Model Misspecification: DRO with Robust Bayesian Ambiguity Sets
Machine Learning (Stat)
Protects decisions when data is messy or wrong.
Distributionally Robust Optimization with Adversarial Data Contamination
Machine Learning (CS)
Protects machine-learning models from corrupted or shifting data.
Distributionally Robust Graph Out-of-Distribution Recommendation via Diffusion Model
Machine Learning (CS)
Makes recommendations, such as movie suggestions, better even with noisy data.