Uncovering Fairness through Data Complexity as an Early Indicator
By: Juliett Suárez Ferreira, Marija Slavkovik, Jorge Casillas
Potential Business Impact:
Finds hidden unfairness in computer decisions.
Fairness is a central concern in machine learning (ML) applications. To date, no study has examined how disparities in classification complexity between privileged and unprivileged groups, a potential early indicator of unfairness, influence the fairness of the resulting solutions. In this work, we investigate this gap: we use synthetic datasets designed to capture a variety of biases, from historical bias to measurement and representational bias, and evaluate how differences in complexity metrics between groups correlate with group fairness metrics. We then apply association rule mining to identify patterns that link disproportionate complexity differences between groups to fairness-related outcomes, offering data-centric indicators to guide bias mitigation. We also validate our findings on real-world problems, providing evidence that quantifying group-wise classification complexity can uncover early indicators of potential fairness challenges. This investigation helps practitioners proactively address bias in classification tasks.
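As a rough illustration of the core idea, the sketch below computes a standard data complexity measure separately for a privileged and an unprivileged group and contrasts the resulting gap with a group fairness metric of a downstream classifier. The specific measures used here (Fisher's discriminant ratio F1 and statistical parity difference), the synthetic data, and the helper function names are illustrative assumptions, not the paper's exact experimental setup.

```python
# Illustrative sketch only: the complexity measure (Fisher's discriminant ratio, F1)
# and the fairness measure (statistical parity difference) are assumptions chosen
# for demonstration; they are not necessarily the metrics used in the paper.
import numpy as np
from sklearn.linear_model import LogisticRegression


def fisher_discriminant_ratio(X, y):
    """Maximum per-feature Fisher's discriminant ratio (complexity measure F1).

    Higher values mean at least one feature separates the classes well,
    i.e. the classification problem is *less* complex for that group.
    """
    ratios = []
    for j in range(X.shape[1]):
        x0, x1 = X[y == 0, j], X[y == 1, j]
        num = (x0.mean() - x1.mean()) ** 2
        den = x0.var() + x1.var()
        ratios.append(num / den if den > 0 else 0.0)
    return max(ratios)


def statistical_parity_difference(y_pred, group):
    """P(y_hat = 1 | unprivileged) - P(y_hat = 1 | privileged)."""
    return y_pred[group == 0].mean() - y_pred[group == 1].mean()


# Synthetic example: the privileged group (1) has better-separated classes
# than the unprivileged group (0), mimicking a representational disparity.
rng = np.random.default_rng(0)
n = 2000
group = rng.integers(0, 2, n)              # 0 = unprivileged, 1 = privileged
y = rng.integers(0, 2, n)
shift = np.where(group == 1, 2.0, 0.5)     # stronger class signal for group 1
X = rng.normal(0, 1, (n, 3))
X[:, 0] += y * shift                       # feature 0 carries the class signal

# Group-wise complexity difference: a data-centric early indicator.
f1_unpriv = fisher_discriminant_ratio(X[group == 0], y[group == 0])
f1_priv = fisher_discriminant_ratio(X[group == 1], y[group == 1])
print(f"F1 unprivileged={f1_unpriv:.3f}, privileged={f1_priv:.3f}, "
      f"gap={f1_priv - f1_unpriv:.3f}")

# Fairness of a downstream classifier trained on the pooled data.
clf = LogisticRegression().fit(X, y)
spd = statistical_parity_difference(clf.predict(X), group)
print(f"Statistical parity difference: {spd:.3f}")
```

In this toy setup, a large gap in group-wise F1 tends to coincide with a nonzero statistical parity difference, which is the kind of association between complexity disparities and fairness outcomes that the paper mines for at scale.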
Similar Papers
Beyond Internal Data: Constructing Complete Datasets for Fairness Testing
Machine Learning (CS)
Tests AI for fairness without private data.
Software Fairness Dilemma: Is Bias Mitigation a Zero-Sum Game?
Machine Learning (CS)
Makes AI fairer without hurting anyone's performance.
When Fairness Isn't Statistical: The Limits of Machine Learning in Evaluating Legal Reasoning
Computation and Language
Shows computer fairness checks fail in law.