Bias as a Virtue: Rethinking Generalization under Distribution Shifts

Published: May 31, 2025 | arXiv ID: 2506.00407v1

By: Ruixuan Chen, Wentao Li, Jiahui Xiao, and more

Potential Business Impact:

Helps machine learning models keep working well when the data they see in deployment differs from the data they were trained on.

Business Areas:
A/B Testing, Data and Analytics

Machine learning models often degrade when deployed on data distributions different from their training data. Challenging conventional validation paradigms, we demonstrate that higher in-distribution (ID) bias can lead to better out-of-distribution (OOD) generalization. Our Adaptive Distribution Bridge (ADB) framework implements this insight by introducing controlled statistical diversity during training, enabling models to develop bias profiles that generalize effectively across distributions. Empirically, we observe a robust negative correlation in which higher ID bias corresponds to lower OOD error, a finding that contradicts standard practices focused on minimizing validation error. Evaluation on multiple datasets shows our approach significantly improves OOD generalization: ADB achieves robust mean error reductions of up to 26.8% compared to traditional cross-validation, and it consistently identifies high-performing training strategies, with percentile ranks often exceeding 74.4%. Our work provides both a practical method for improving generalization and a theoretical framework for reconsidering the role of bias in robust machine learning.
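The abstract does not spell out ADB's mechanism, but the core intuition, trading higher in-distribution bias for lower error under distribution shift, can be illustrated with a minimal Python sketch. Everything below is a hypothetical stand-in for the paper's method: the random-reweighting loop approximating "controlled statistical diversity", the toy linear data, and the Ridge regularization strengths standing in for candidate training strategies are all assumptions, not the authors' implementation.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)

# Toy setup: a small training (ID) sample and a shifted deployment (OOD) sample.
n, d = 40, 15
X_id = rng.normal(0.0, 1.0, (n, d))
w_true = rng.normal(0.0, 1.0, d)
y_id = X_id @ w_true + rng.normal(0.0, 1.0, n)
X_ood = rng.normal(0.5, 1.5, (200, d))   # mean- and scale-shifted covariates
y_ood = X_ood @ w_true + rng.normal(0.0, 1.0, 200)

def diversified_id_error(alpha, n_views=5):
    """Average ID error of a candidate across randomly reweighted views of
    the training data: a hypothetical stand-in for ADB's 'controlled
    statistical diversity', not the paper's actual procedure."""
    errs = []
    for _ in range(n_views):
        w = rng.dirichlet(np.ones(n)) * n   # random importance weights, mean 1
        model = Ridge(alpha=alpha).fit(X_id, y_id, sample_weight=w)
        errs.append(mean_squared_error(y_id, model.predict(X_id)))
    return float(np.mean(errs))

# Candidate "training strategies" (here, just regularization strengths).
for alpha in [0.01, 0.1, 1.0, 10.0, 100.0]:
    id_err = diversified_id_error(alpha)
    ood_err = mean_squared_error(
        y_ood, Ridge(alpha=alpha).fit(X_id, y_id).predict(X_ood))
    print(f"alpha={alpha:7.2f}  ID error={id_err:.3f}  OOD error={ood_err:.3f}")
# Selecting by lowest ID error would pick the weakest regularizer; under the
# shift above, a more biased (higher-alpha) candidate tends to do better OOD,
# mirroring the negative ID-bias/OOD-error correlation the paper reports.
```

On this toy problem, the candidates with the lowest ID error are typically not the ones with the lowest OOD error, which is the pattern the abstract describes; the actual ADB framework and its 26.8% error-reduction result are, of course, evaluated in the paper itself.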

Country of Origin
🇨🇳 China

Page Count
14 pages

Category
Computer Science:
Machine Learning (CS)