The AI Fairness Myth: A Position Paper on Context-Aware Bias
By: Kessia Nepomuceno, Fabio Petrillo
Potential Business Impact:
Makes AI treat people fairly, even if that means deliberately helping some groups more than others.
Defining fairness in AI remains a persistent challenge, largely due to its deeply context-dependent nature and the lack of a universal definition. While numerous mathematical formulations of fairness exist, they sometimes conflict with one another and diverge from social, economic, and legal understandings of justice. Traditional quantitative definitions primarily focus on statistical comparisons, but they often fail to simultaneously satisfy multiple fairness constraints. Drawing on philosophical theories (Rawls' Difference Principle and Dworkin's theory of equality) and empirical evidence supporting affirmative action, we argue that fairness sometimes necessitates deliberate, context-aware preferential treatment of historically marginalized groups. Rather than viewing bias solely as a flaw to eliminate, we propose a framework that embraces corrective, intentional biases to promote genuine equality of opportunity. Our approach involves identifying unfairness, recognizing protected groups/individuals, applying corrective strategies, measuring impact, and iterating improvements. By bridging mathematical precision with ethical and contextual considerations, we advocate for an AI fairness paradigm that goes beyond neutrality to actively advance social justice.
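The five-step approach described in the abstract (identify unfairness, recognize protected groups/individuals, apply corrective strategies, measure impact, iterate) can be sketched as a simple feedback loop. The sketch below is illustrative only: the demographic-parity metric, the threshold-adjustment strategy, and the toy score data are assumptions for demonstration, not the authors' implementation.

```python
# Hedged sketch of the paper's five-step corrective loop. Function names,
# the fairness metric (demographic parity), and the correction strategy
# (group-specific decision thresholds) are illustrative assumptions.

def selection_rate(scores, threshold):
    """Fraction of candidates whose score clears the decision threshold."""
    return sum(s >= threshold for s in scores) / len(scores)

def demographic_parity_gap(group_a, group_b, thresholds):
    """Step 4 (measure impact): gap in selection rates between groups."""
    return (selection_rate(group_a, thresholds["a"])
            - selection_rate(group_b, thresholds["b"]))

def corrective_iteration(group_a, group_b, tolerance=0.02,
                         step=0.01, max_iters=200):
    """group_a: historically advantaged scores; group_b: protected group.

    Steps 1-2 (identify unfairness, recognize the protected group) are
    assumed done upstream; here we apply and tune the correction.
    """
    thresholds = {"a": 0.5, "b": 0.5}
    for _ in range(max_iters):  # Step 5: iterate improvements
        gap = demographic_parity_gap(group_a, group_b, thresholds)
        if abs(gap) <= tolerance:
            break
        # Step 3: corrective, intentional bias -- a deliberate
        # preferential adjustment of the protected group's threshold.
        thresholds["b"] -= step if gap > 0 else -step
    return thresholds, demographic_parity_gap(group_a, group_b, thresholds)

# Toy scores: the protected group's scores are systematically lower,
# standing in for the effect of historical disadvantage.
advantaged = [0.9, 0.8, 0.7, 0.6, 0.4]
protected = [0.7, 0.5, 0.45, 0.3, 0.2]
final_thresholds, final_gap = corrective_iteration(advantaged, protected)
```

After the loop, the protected group's threshold sits below the advantaged group's, and the selection-rate gap falls within tolerance: an example of bias used deliberately as a corrective instrument rather than treated only as a flaw to eliminate.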