Alternative Fairness and Accuracy Optimization in Criminal Justice
By: Shaolong Wu, James Blume, Geshi Yeung
Potential Business Impact:
Makes AI fairer when judging people.
Algorithmic fairness has grown rapidly as a research area, yet key concepts remain unsettled, especially in criminal justice. We review group, individual, and process fairness and map the conditions under which they conflict. We then develop a simple modification to standard group fairness: rather than requiring exact parity across protected groups, we minimize a weighted error loss while keeping differences in false negative rates within a small tolerance. This relaxation makes feasible solutions easier to find, can raise predictive accuracy, and makes the ethical choice of error costs explicit. We situate the proposal within three classes of critique: biased and incomplete data, latent affirmative action, and the explosion of subgroup constraints. Finally, we offer a practical framework for deployment in public decision systems built on three pillars: need-based decisions, transparency and accountability, and narrowly tailored definitions and solutions. Together, these elements link technical design to legitimacy and provide actionable guidance for agencies that use risk assessment and related tools.
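To make the relaxed constraint concrete, here is a minimal Python sketch, not the paper's implementation: it post-processes risk scores by grid-searching per-group decision thresholds, minimizing a weighted false-negative/false-positive loss subject to the false-negative-rate gap staying within a tolerance. The cost weights C_FN and C_FP, the tolerance TOL, and the synthetic scores are all illustrative assumptions.

```python
# Sketch of tolerance-based group fairness: minimize a weighted error loss
# while keeping |FNR(group 0) - FNR(group 1)| <= TOL, instead of exact parity.
# All data, weights, and tolerances below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic risk scores and outcomes for two protected groups (assumption).
n = 2000
group = rng.integers(0, 2, size=n)          # protected attribute: 0 or 1
label = rng.integers(0, 2, size=n)          # 1 = adverse outcome occurs
score = np.clip(0.5 * label + 0.15 * group * label
                + rng.normal(0.3, 0.2, n), 0, 1)

C_FN, C_FP = 5.0, 1.0   # error costs: the ethical choice, made explicit
TOL = 0.02              # allowed false-negative-rate gap between groups
grid = np.linspace(0, 1, 101)

def group_stats(g):
    """Weighted error loss and FNR for group g at every grid threshold."""
    mask = group == g
    s, y = score[mask], label[mask] == 1
    losses, fnrs = [], []
    for t in grid:
        pred = s >= t                        # classify "high risk" at threshold t
        fn = np.sum(y & ~pred)               # missed adverse outcomes
        fp = np.sum(~y & pred)               # wrongly flagged individuals
        losses.append(C_FN * fn + C_FP * fp)
        fnrs.append(fn / max(y.sum(), 1))
    return np.array(losses), np.array(fnrs)

loss0, fnr0 = group_stats(0)
loss1, fnr1 = group_stats(1)

# Exhaustive search over per-group thresholds, keeping only pairs that
# satisfy the tolerance constraint rather than demanding exact parity.
best = None
for i, t0 in enumerate(grid):
    for j, t1 in enumerate(grid):
        if abs(fnr0[i] - fnr1[j]) <= TOL:
            total = loss0[i] + loss1[j]
            if best is None or total < best[0]:
                best = (total, t0, t1, abs(fnr0[i] - fnr1[j]))

print(f"weighted loss={best[0]:.1f}  "
      f"thresholds=({best[1]:.2f}, {best[2]:.2f})  FNR gap={best[3]:.3f}")
```

Because the parity requirement is relaxed to a tolerance band, the feasible set is larger than under exact equality, which is why solutions are easier to find and the minimized loss can be lower.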
Similar Papers
Fairness for the People, by the People: Minority Collective Action
Machine Learning (CS)
Helps minority groups fix unfair computer decisions.