Mitigating Spurious Correlation via Distributionally Robust Learning with Hierarchical Ambiguity Sets
By: Sung Ho Jo, Seonghwi Kim, Minwoo Chae
Potential Business Impact:
Makes AI work better when data changes.
Conventional supervised learning methods are often vulnerable to spurious correlations, particularly under distribution shifts in test data. To address this issue, several approaches, most notably Group DRO, have been developed. While these methods are highly robust to subpopulation or group shifts, they remain vulnerable to intra-group distributional shifts, which frequently occur in minority groups with limited samples. We propose a hierarchical extension of Group DRO that addresses both inter-group and intra-group uncertainties, providing robustness to distribution shifts at multiple levels. We also introduce new benchmark settings that simulate realistic minority group distribution shifts, an important yet previously underexplored challenge in spurious correlation research. Our method demonstrates strong robustness under these conditions, where existing robust learning methods consistently fail, while also achieving superior performance on standard benchmarks. These results highlight the importance of broadening the ambiguity set to better capture both inter-group and intra-group distributional uncertainties.
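To make the two levels of robustness concrete, here is a minimal sketch of a hierarchical DRO-style objective. It is an illustration under stated assumptions, not the authors' actual method: intra-group uncertainty is approximated with a CVaR reweighting (averaging the worst fraction of losses inside each group), and inter-group uncertainty with an exponentiated weighting over per-group losses in the spirit of Group DRO. The function name `hierarchical_dro_loss` and the parameters `eta_group` and `alpha` are hypothetical choices for this example.

```python
import numpy as np

def hierarchical_dro_loss(losses, groups, n_groups, eta_group=1.0, alpha=0.5):
    """Sketch of a two-level robust objective (illustrative, not the paper's method).

    Intra-group level: CVaR_alpha of each group's per-sample losses, i.e. the
    mean of the worst alpha-fraction, standing in for an intra-group ambiguity set.
    Inter-group level: exponentiated weights over the per-group robust losses,
    as in the usual Group DRO update, emphasizing the worst group.
    """
    group_losses = np.zeros(n_groups)
    for g in range(n_groups):
        lg = np.sort(losses[groups == g])[::-1]          # descending order
        k = max(1, int(np.ceil(alpha * lg.size)))        # worst alpha-fraction
        group_losses[g] = lg[:k].mean()                  # intra-group CVaR
    q = np.exp(eta_group * group_losses)                 # inter-group weights
    q /= q.sum()
    return float(q @ group_losses), q
```

Because both levels upweight high-loss regions, the resulting objective upper-bounds the plain average loss, which is the basic mechanism by which the broadened ambiguity set buys robustness to shifts inside small minority groups.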
Similar Papers
Group Distributionally Robust Machine Learning under Group Level Distributional Uncertainty
Machine Learning (CS)
Makes AI fair for everyone, even small groups.
Statistical Inference for Conditional Group Distributionally Robust Optimization with Cross-Entropy Loss
Methodology
Helps computers learn from many different examples.
Distributionally Robust Optimization with Adversarial Data Contamination
Machine Learning (CS)
Protects computer learning from bad data and changes.