Representation Invariance and Allocation: When Subgroup Balance Matters
By: Anissa Alloula, Charles Jones, Zuzanna Wakefield-Skorniewska, and more
Unequal representation of demographic groups in training data poses challenges to model generalisation across populations. Standard practice assumes that balancing subgroup representation optimises performance. However, recent empirical results contradict this assumption: in some cases, imbalanced data distributions actually improve subgroup performance, while in others, subgroup performance remains unaffected by the absence of an entire subgroup during training. We conduct a systematic study of subgroup allocation across four vision and language models, varying training data composition to characterise the sensitivity of subgroup performance to data balance. We propose the latent separation hypothesis, which states that a partially fine-tuned model's dependence on subgroup representation is determined by the degree of separation between subgroups in the latent space of the pre-trained model. We formalise this hypothesis, provide theoretical analysis, and validate it empirically. Finally, we present a practical application to foundation model fine-tuning, demonstrating that quantitative analysis of latent subgroup separation can inform data collection and balancing decisions.
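To make the abstract's closing claim more concrete, here is a minimal, hypothetical sketch of how one might quantify latent subgroup separation in a pre-trained encoder's embedding space. The paper does not specify its metric; this sketch uses the cross-validated accuracy of a linear probe as one common proxy, and the function name, synthetic embeddings, and threshold interpretation are illustrative assumptions rather than the authors' method.

```python
# Hypothetical sketch: quantify how separable two demographic subgroups are
# in a pre-trained model's latent (embedding) space. A linear probe's
# held-out accuracy is used here as a simple proxy; this is an assumption,
# not the paper's stated metric.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score


def latent_subgroup_separation(embeddings: np.ndarray, subgroup_labels: np.ndarray) -> float:
    """Mean cross-validated accuracy of a linear probe predicting the
    subgroup label from frozen pre-trained embeddings.

    For two balanced subgroups, accuracy near 0.5 suggests the subgroups are
    entangled in latent space; accuracy near 1.0 suggests they are well
    separated, which, under the latent separation hypothesis, is when
    subgroup balance in the fine-tuning data is expected to matter most.
    """
    probe = LogisticRegression(max_iter=1000)
    scores = cross_val_score(probe, embeddings, subgroup_labels, cv=5)
    return float(scores.mean())


if __name__ == "__main__":
    # Synthetic stand-in for embeddings extracted from a frozen encoder.
    rng = np.random.default_rng(0)
    n, d = 500, 64
    labels = rng.integers(0, 2, size=n)
    offset = 2.0  # larger offset -> subgroups drawn further apart
    embeddings = rng.normal(size=(n, d)) + offset * labels[:, None]
    print(f"separation score: {latent_subgroup_separation(embeddings, labels):.2f}")
```

In practice the embeddings would come from the frozen pre-trained backbone being fine-tuned, and the resulting score could guide whether additional data collection or rebalancing for an under-represented subgroup is likely to pay off.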
Similar Papers
When Are Learning Biases Equivalent? A Unifying Framework for Fairness, Robustness, and Distribution Shift
Machine Learning (CS)
Proposes a unifying framework relating learning biases across fairness, robustness, and distribution shift.
Understanding challenges to the interpretation of disaggregated evaluations of algorithmic fairness
Machine Learning (Stat)
Examines challenges in interpreting disaggregated (subgroup-level) evaluations of algorithmic fairness.
Prompt Fairness: Sub-group Disparities in LLMs
Machine Learning (CS)
Studies sub-group disparities in how LLMs respond to prompts.