"I think this is fair": Uncovering the Complexities of Stakeholder Decision-Making in AI Fairness Assessment
By: Lin Luo, Yuri Nakao, Mathieu Chollet, and more
Potential Business Impact:
Lets regular people decide what's fair for AI.
Assessing fairness in artificial intelligence (AI) typically involves AI experts who select protected features, choose fairness metrics, and set fairness thresholds. However, little is known about how stakeholders, particularly those affected by AI outcomes but lacking AI expertise, assess fairness. To address this gap, we conducted a qualitative study with 30 stakeholders without AI expertise, representing potential decision subjects in a credit rating scenario, to examine how they assess fairness when placed in the role of prioritizing features, choosing metrics, and setting thresholds. We reveal that stakeholders' fairness decisions are more complex than typical AI expert practices: they considered features far beyond legally protected ones, tailored metrics to specific contexts, set diverse yet stricter fairness thresholds, and even preferred designing customized fairness definitions. Our results extend the understanding of how stakeholders can meaningfully contribute to AI fairness governance and mitigation, underscoring the importance of incorporating stakeholders' nuanced fairness judgments.
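To make the expert practice the abstract describes concrete (pick a protected feature, pick a metric, set a threshold), here is a minimal sketch using one common metric, demographic parity difference. The data, group labels, and threshold value are illustrative assumptions, not taken from the study.

```python
def demographic_parity_difference(outcomes, groups):
    """Absolute gap in positive-outcome rates between two groups.

    outcomes: list of 0/1 decisions (e.g., 1 = credit approved)
    groups:   parallel list of protected-feature values
    """
    rates = {}
    for g in set(groups):
        members = [o for o, grp in zip(outcomes, groups) if grp == g]
        rates[g] = sum(members) / len(members)
    a, b = rates.values()
    return abs(a - b)

# Toy credit-approval outcomes split by a hypothetical protected feature.
outcomes = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

gap = demographic_parity_difference(outcomes, groups)
threshold = 0.2  # an expert-style cutoff; the study's stakeholders tended to set stricter ones
print(f"parity gap = {gap}, within threshold = {gap <= threshold}")
```

Here group A is approved at 0.75 and group B at 0.25, so the gap of 0.5 fails the 0.2 threshold. The study's finding is that stakeholders often wanted different features, different metrics, and tighter thresholds than the defaults experts would plug into a check like this.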
Similar Papers
Mapping Stakeholder Needs to Multi-Sided Fairness in Candidate Recommendation for Algorithmic Hiring
Computers and Society
Helps hiring algorithms treat all candidates fairly.
Disclosure and Evaluation as Fairness Interventions for General-Purpose AI
Computers and Society
Helps AI be fair in different situations.
A Methodological Framework and Questionnaire for Investigating Perceived Algorithmic Fairness
Human-Computer Interaction
Shows how people in Bangladesh perceive AI fairness.