Fairness for the People, by the People: Minority Collective Action

Published: August 21, 2025 | arXiv ID: 2508.15374v1

By: Omri Ben-Dov, Samira Samadi, Amartya Sanyal, and more

Potential Business Impact:

Lets coordinated minority groups reduce unfair automated decisions by relabeling their own contributed data.

Business Areas:
Machine Learning, Artificial Intelligence, Data and Analytics, Software

Machine learning models often preserve biases present in training data, leading to unfair treatment of certain minority groups. Although a range of firm-side bias mitigation techniques exists, they typically incur utility costs and require organizational buy-in. Recognizing that many models rely on user-contributed data, end users can instead induce fairness through the framework of Algorithmic Collective Action, in which a coordinated minority group strategically relabels its own data to enhance fairness without altering the firm's training process. We propose three practical, model-agnostic methods to approximate ideal relabeling and validate them on real-world datasets. Our findings show that a subgroup of the minority can substantially reduce unfairness at only a small cost in overall prediction error.
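To make the collective-action setup concrete, here is a minimal sketch of the general idea, not the paper's three methods: a coordinated fraction of the minority group (the `alpha` participation rate, the synthetic data, and the naive flip-negatives-to-positives heuristic are all assumptions for illustration) relabels its own training points, the firm trains its model unchanged, and we compare the demographic-parity gap before and after.

```python
# Sketch of minority collective action via relabeling (illustrative heuristic,
# not the paper's proposed methods): a coordinated subset of the minority
# flips its own negative labels to positive before the firm trains.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic data: features x, protected group g (1 = minority), biased labels y.
n = 5000
g = (rng.random(n) < 0.2).astype(int)           # ~20% minority
x = rng.normal(size=(n, 5))
score = x[:, 0] + 0.5 * x[:, 1] - 0.8 * g       # bias term against minority
y = (score + rng.normal(scale=0.5, size=n) > 0).astype(int)

def dp_gap(model, features, g):
    """Demographic-parity gap: |P(pred=1 | g=1) - P(pred=1 | g=0)|."""
    pred = model.predict(features)
    return abs(pred[g == 1].mean() - pred[g == 0].mean())

features = np.c_[x, g]

# Firm's training pipeline, which the collective does not alter.
base = LogisticRegression(max_iter=1000).fit(features, y)

# Collective action: a coordinated 30% of the minority relabels its own
# negative examples as positive before contributing data.
alpha = 0.3
collective = (g == 1) & (rng.random(n) < alpha)
y_action = y.copy()
y_action[collective & (y == 0)] = 1

acted = LogisticRegression(max_iter=1000).fit(features, y_action)

print(f"DP gap before: {dp_gap(base, features, g):.3f}")
print(f"DP gap after:  {dp_gap(acted, features, g):.3f}")
print(f"Accuracy before/after: {base.score(features, y):.3f} / "
      f"{acted.score(features, y):.3f}")
```

Running the sketch shows the pattern the abstract describes: the fairness gap shrinks while accuracy on the original labels drops only slightly, since only the collective's own data points change.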

Page Count
29 pages

Category
Computer Science:
Machine Learning (CS)