Fairness for the People, by the People: Minority Collective Action
By: Omri Ben-Dov, Samira Samadi, Amartya Sanyal, and more
Potential Business Impact:
Helps minority groups fix unfair computer decisions.
Machine learning models often preserve biases present in their training data, leading to unfair treatment of certain minority groups. Although an array of firm-side bias mitigation techniques exists, they typically incur utility costs and require organizational buy-in. Recognizing that many models rely on user-contributed data, end-users can induce fairness through the framework of Algorithmic Collective Action, in which a coordinated minority group strategically relabels its own data to enhance fairness, without altering the firm's training process. We propose three practical, model-agnostic methods to approximate ideal relabeling and validate them on real-world datasets. Our findings show that a subgroup of the minority can substantially reduce unfairness with only a small impact on the overall prediction error.
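The paper's three relabeling methods are not detailed in this summary, so the sketch below is only a hypothetical illustration of the collective-action idea: it assumes synthetic data, a scikit-learn logistic regression standing in for the firm's model, a demographic-parity gap as the fairness measure, and a naive heuristic in which a participating fraction of the minority relabels its own negative examples as positive before contributing data. None of these choices should be read as the authors' actual methods.

    # Illustrative sketch of minority collective action via relabeling (hypothetical).
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 4000
    group = rng.binomial(1, 0.2, n)            # 1 = minority, 0 = majority
    x = rng.normal(size=(n, 3)) + group[:, None] * 0.5
    # Biased labels: minority positives are under-represented in the data.
    logits = x @ np.array([1.0, -0.5, 0.3]) - 1.2 * group
    y = (logits + rng.normal(scale=0.5, size=n) > 0).astype(int)

    def positive_rate_gap(model, x, group):
        """Demographic-parity gap: |P(pred=1 | majority) - P(pred=1 | minority)|."""
        pred = model.predict(x)
        return abs(pred[group == 0].mean() - pred[group == 1].mean())

    # The firm trains as usual on the contributed data.
    base = LogisticRegression(max_iter=1000).fit(x, y)
    print("baseline gap:  ", positive_rate_gap(base, x, group))
    print("baseline error:", 1 - base.score(x, y))

    # Collective action: a fraction alpha of the minority's negative examples are
    # relabeled as positive by their owners (naive stand-in for the paper's
    # approximation of ideal relabeling).
    alpha = 0.3
    y_collective = y.copy()
    minority_neg = np.where((group == 1) & (y == 0))[0]
    participants = rng.choice(minority_neg, size=int(alpha * len(minority_neg)), replace=False)
    y_collective[participants] = 1

    acted = LogisticRegression(max_iter=1000).fit(x, y_collective)
    print("post-action gap:  ", positive_rate_gap(acted, x, group))
    print("post-action error:", 1 - acted.score(x, y))  # error against original labels

Under these assumptions, the retrained model's positive-rate gap shrinks while the error measured against the original labels rises only slightly, mirroring the trade-off the abstract describes.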
Similar Papers
Alternative Fairness and Accuracy Optimization in Criminal Justice
Machine Learning (CS)
Helps judges make fairer decisions about people.