Toward Unifying Group Fairness Evaluation from a Sparsity Perspective
By: Zhecheng Sheng, Jiawei Zhang, Enmao Diao
Potential Business Impact:
Makes computer decisions fairer for everyone.
Ensuring algorithmic fairness remains a significant challenge in machine learning, particularly as models are deployed across increasingly diverse domains. While numerous fairness criteria exist, they often lack generalizability across different machine learning problems. This paper examines the connections and differences among various sparsity measures in promoting fairness and proposes a unified sparsity-based framework for evaluating algorithmic fairness. The framework aligns with existing fairness criteria and applies to a wide range of machine learning tasks. Extensive experiments across a variety of datasets and bias mitigation methods show the framework's effectiveness as an evaluation metric. This work offers a novel perspective on algorithmic fairness by framing it through the lens of sparsity and social equity, with potential for broader impact on fairness research and applications.
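The abstract does not specify which sparsity measures the framework uses. As a minimal illustrative sketch only, one could score group fairness by applying a sparsity measure, here the Gini index, a hypothetical choice not confirmed by the paper, to the vector of per-group error rates: a score of 0 means errors are spread evenly across groups, while values near 1 mean errors are concentrated in a few groups.

```python
import numpy as np

def gini(x):
    """Gini index of a non-negative vector.

    0 = values perfectly uniform across entries; approaches 1 as the
    mass concentrates in a single entry (a common sparsity measure).
    """
    x = np.sort(np.asarray(x, dtype=float))
    n = x.size
    total = x.sum()
    if total == 0:
        return 0.0  # no errors anywhere: trivially uniform
    cum = np.cumsum(x) / total
    return (n + 1 - 2 * cum.sum()) / n

def group_fairness_score(y_true, y_pred, groups):
    """Sparsity-style fairness score (hypothetical, for illustration):
    Gini index over per-group misclassification rates.
    Lower = errors distributed evenly across groups (fairer)."""
    y_true, y_pred, groups = map(np.asarray, (y_true, y_pred, groups))
    error_rates = [
        np.mean(y_true[groups == g] != y_pred[groups == g])
        for g in np.unique(groups)
    ]
    return gini(error_rates)

# All errors fall on group 1 -> maximally uneven for two groups (0.5)
print(group_fairness_score([0, 1, 0, 1], [0, 1, 1, 0], [0, 0, 1, 1]))
```

This is only a sketch of the general recipe the paper gestures at (fairness as evenness of a per-group statistic); the paper's actual measures and target statistics may differ.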
Similar Papers
A Unifying Human-Centered AI Fairness Framework
Machine Learning (CS)
Helps AI treat everyone fairly, no matter what.
Algorithmic Fairness: Not a Purely Technical but Socio-Technical Property
Machine Learning (CS)
Makes AI fair for everyone, not just groups.
Happiness as a Measure of Fairness
Machine Learning (CS)
Makes computer decisions fairer and happier for everyone.