AI Fairness Beyond Complete Demographics: Current Achievements and Future Directions
By: Zichong Wang, Zhipeng Yin, Roland H. C. Yap, and more
Potential Business Impact:
Makes AI fair even with missing information.
Fairness in artificial intelligence (AI) has become a growing concern due to discriminatory outcomes in AI-based decision-making systems. While various methods have been proposed to mitigate bias, most rely on complete demographic information, an assumption often impractical due to legal constraints and the risk of reinforcing discrimination. This survey examines fairness in AI when demographics are incomplete, addressing the gap between traditional approaches and real-world challenges. We introduce a novel taxonomy of fairness notions in this setting, clarifying their relationships and distinctions. Additionally, we summarize existing techniques that promote fairness beyond complete demographics and highlight open research questions to encourage further progress in the field.
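To make the setting concrete, one commonly discussed family of approaches when demographics are unavailable is to infer group membership with a proxy model and estimate fairness metrics from those soft assignments. The sketch below is purely illustrative and not the authors' method: it assumes a hypothetical proxy model has already produced per-example probabilities of belonging to one of two groups, and uses them as weights to estimate a demographic-parity gap.

```python
# A minimal, illustrative sketch (not taken from the survey): estimating a
# demographic-parity gap without observed group labels, using soft group
# probabilities from an assumed proxy model as weights.
import numpy as np

def soft_demographic_parity_gap(y_pred, group_proba):
    """Estimate P(y_hat=1 | group=1) - P(y_hat=1 | group=0) from
    probabilistic group assignments.

    y_pred      : array of 0/1 model decisions, shape (n,)
    group_proba : array of inferred P(group=1 | features), shape (n,)
    """
    y_pred = np.asarray(y_pred, dtype=float)
    w1 = np.asarray(group_proba, dtype=float)    # weight toward group 1
    w0 = 1.0 - w1                                # weight toward group 0
    rate_g1 = np.sum(w1 * y_pred) / np.sum(w1)   # weighted positive rate, group 1
    rate_g0 = np.sum(w0 * y_pred) / np.sum(w0)   # weighted positive rate, group 0
    return rate_g1 - rate_g0

# Toy usage with made-up decisions and inferred group probabilities.
decisions = np.array([1, 0, 1, 1, 0, 1, 0, 0])
proba_group1 = np.array([0.9, 0.8, 0.2, 0.1, 0.7, 0.3, 0.6, 0.2])
print(f"Estimated parity gap: {soft_demographic_parity_gap(decisions, proba_group1):+.3f}")
```

Because the group labels are inferred rather than observed, such estimates carry additional uncertainty; characterizing and bounding that uncertainty is one of the open questions this line of work addresses.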
Similar Papers
Beyond Internal Data: Constructing Complete Datasets for Fairness Testing
Machine Learning (CS)
Tests AI for fairness without private data.
Beyond Internal Data: Bounding and Estimating Fairness from Incomplete Data
Machine Learning (CS)
Tests AI fairness using separate data sources.
Algorithmic Fairness: Not a Purely Technical but Socio-Technical Property
Machine Learning (CS)
Makes AI fair for everyone, not just groups.