Score: 3

Are Domain Generalization Benchmarks with Accuracy on the Line Misspecified?

Published: March 31, 2025 | arXiv ID: 2504.00186v3

By: Olawale Salaudeen, Nicole Chiou, Shiny Weng, and more

BigTech Affiliations: Stanford University, Massachusetts Institute of Technology

Potential Business Impact:

Fixes benchmarks that fail to catch AI models relying on unreliable shortcuts.

Business Areas:
A/B Testing, Data and Analytics

Spurious correlations (unstable statistical shortcuts a model can exploit) are expected to degrade performance out-of-distribution (OOD). However, across many popular OOD generalization benchmarks, vanilla empirical risk minimization (ERM) often achieves the highest OOD accuracy. Moreover, gains in in-distribution accuracy generally translate into gains in OOD accuracy, a phenomenon termed "accuracy on the line," which contradicts the expected harm of spurious correlations. We show that these observations are an artifact of misspecified OOD datasets: they do not include shifts in spurious correlations that harm OOD generalization, the very setting they are meant to evaluate. Consequently, current practice evaluates "robustness" without truly stressing the spurious signals we seek to eliminate; our work pinpoints when that happens and how to fix it.

Contributions:
(i) We derive necessary and sufficient conditions for a distribution shift to reveal a model's reliance on spurious features; when these conditions hold, "accuracy on the line" disappears.
(ii) We audit leading OOD datasets and find that most still display accuracy on the line, suggesting they are misspecified for evaluating robustness to spurious correlations.
(iii) We catalog the few well-specified datasets and summarize generalizable design principles, such as identifying datasets of natural interventions (e.g., a pandemic), to guide future well-specified benchmarks.

Country of Origin
🇺🇸 United States

Repos / Data Links

Page Count
79 pages

Category
Computer Science:
Machine Learning (CS)