How May U.S. Courts Scrutinize Their Recidivism Risk Assessment Tools? Contextualizing AI Fairness Criteria on a Judicial Scrutiny-based Framework
By: Tin Nguyen, Jiannan Xu, Phuong-Anh Nguyen-Le, and more
Potential Business Impact:
Gives developers and policymakers a framework for aligning AI fairness criteria in recidivism risk tools with U.S. legal standards.
The AI/HCI and legal communities have developed largely independent conceptualizations of fairness. This conceptual difference hinders the incorporation of technical fairness criteria (e.g., procedural, group, and individual fairness) into sustainable policies and designs, particularly for high-stakes applications like recidivism risk assessment (RRA). To foster common ground, we conduct legal research to identify whether and how technical AI conceptualizations of fairness surface in primary legal sources. We find that while major technical fairness criteria can be linked to constitutional mandates such as "Due Process" and "Equal Protection" through judicial interpretation, several challenges arise when operationalizing them into concrete statutes and regulations. These policies often adopt procedural and group fairness but ignore the major technical criterion of individual fairness. Regarding procedural fairness, judicial "scrutiny" categories are relevant but may not fully capture how courts scrutinize the use of demographic features in potentially discriminatory government tools like RRA. Furthermore, some policies contradict each other on whether to apply procedural fairness to certain demographic features. Thus, we propose a new framework integrating U.S. demographics-related legal scrutiny concepts with technical fairness criteria, and contextualize it in three other major AI-adopting jurisdictions (the EU, China, and India).
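For readers unfamiliar with the technical criteria the abstract contrasts, the following is a minimal, hedged Python sketch of two common formalizations: demographic parity for group fairness and a Lipschitz-style check for individual fairness. The data, threshold, and distance metric are illustrative assumptions, not the paper's method or dataset.

```python
import numpy as np

# Hedged sketch: hypothetical formalizations of two fairness criteria
# named in the abstract. The data, threshold, and distance metric are
# illustrative assumptions, not drawn from the paper.

def group_fairness_gap(scores, groups, threshold=0.5):
    """Demographic-parity gap: the spread in 'high-risk' classification
    rates across demographic groups (0 means exact parity)."""
    rates = [np.mean(scores[groups == g] >= threshold)
             for g in np.unique(groups)]
    return max(rates) - min(rates)

def individual_fairness_violations(features, scores, lipschitz=1.0):
    """Lipschitz-style individual fairness: count pairs whose score
    difference exceeds L times their feature-space distance,
    i.e. |f(x) - f(x')| > L * d(x, x')."""
    violations = 0
    for i in range(len(scores)):
        for j in range(i + 1, len(scores)):
            d_x = np.linalg.norm(features[i] - features[j])
            if abs(scores[i] - scores[j]) > lipschitz * d_x:
                violations += 1
    return violations

# Toy usage on synthetic data.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 4))        # hypothetical defendant features
A = rng.integers(0, 2, size=50)     # hypothetical binary group labels
s = rng.uniform(size=50)            # hypothetical risk scores
print("group fairness gap:", group_fairness_gap(s, A))
print("individual fairness violations:", individual_fairness_violations(X, s))
```

A tool can satisfy one criterion while failing the other, which is why the abstract treats group and individual fairness as distinct technical criteria that policies may adopt or ignore separately.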
Similar Papers
Which Demographic Features Are Relevant for Individual Fairness Evaluation of U.S. Recidivism Risk Assessment Tools?
Computers and Society
Identifies which demographic features matter when evaluating individual fairness of U.S. recidivism risk tools.
Alternative Fairness and Accuracy Optimization in Criminal Justice
Machine Learning (CS)
Explores how to balance fairness and accuracy when optimizing criminal justice risk models.
A Gray Literature Study on Fairness Requirements in AI-enabled Software Engineering
Software Engineering
Surveys practitioner (gray literature) sources on fairness requirements for AI-enabled software.