Measuring the right thing: justifying metrics in AI impact assessments
By: Stefan Buijsman, Herman Veluwenkamp
Potential Business Impact:
Makes AI fairer by justifying how its fairness is measured.
AI Impact Assessments are only as good as the measures used to assess the impact of these systems. It is therefore paramount that we can justify our choice of metrics in these assessments, especially for difficult-to-quantify ethical and social values. We present a two-step approach to ensure that metrics are properly motivated: first a conception is spelled out (e.g. Rawlsian fairness or fairness as solidarity), and then a metric is fitted to that conception. Both steps require separate justifications. Conceptions can be judged on how well they fit the function of, for example, fairness, and we argue that conceptual engineering offers helpful tools for this step. For the second step, fitting metrics to a conception, we examine competing fairness metrics to show that the additional content a conception offers helps us justify the choice of a specific metric. We thus advocate that impact assessments be clear not only about their metrics, but also about the conceptions that motivate those metrics.
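To make the second step concrete, the following is a minimal sketch (our illustration, not code or metrics taken from the paper) contrasting two commonly discussed fairness metrics, demographic parity and equalized odds. Each fits a different conception of fairness, and the two can disagree on the same predictions, which is why the choice between them needs the kind of justification the paper calls for.

# Illustrative sketch only: two fairness metrics tied to different conceptions
# of fairness. The specific metrics and data here are assumptions for
# illustration, not the paper's own examples.
import numpy as np

def demographic_parity_difference(y_pred, group):
    # Gap in positive-prediction rates between groups
    # (fits a conception of fairness as equal rates of favourable outcomes).
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return max(rates) - min(rates)

def equalized_odds_difference(y_true, y_pred, group):
    # Largest gap in true-positive or false-positive rates between groups
    # (fits a conception of fairness as equal error burdens).
    gaps = []
    for label in (0, 1):  # label 0 gives false-positive rates, label 1 true-positive rates
        rates = [y_pred[(group == g) & (y_true == label)].mean()
                 for g in np.unique(group)]
        gaps.append(max(rates) - min(rates))
    return max(gaps)

# Toy predictions that look fair under one conception but not the other.
y_true = np.array([1, 0, 1, 0, 1, 0, 1, 0])
y_pred = np.array([1, 0, 1, 0, 1, 1, 0, 0])
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])

print(demographic_parity_difference(y_pred, group))      # 0.0: equal positive rates
print(equalized_odds_difference(y_true, y_pred, group))  # 0.5: unequal error rates

On this toy data the demographic parity gap is zero while the equalized odds gap is 0.5, so whether the predictions count as "fair" depends entirely on which conception the metric is meant to capture.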
Similar Papers
Measuring What Matters: Connecting AI Ethics Evaluations to System Attributes, Hazards, and Harms
Human-Computer Interaction
Helps AI systems avoid causing harm.
The Quest for Reliable Metrics of Responsible AI
Computers and Society
Makes AI fair and trustworthy for everyone.
Algorithmic Fairness: Not a Purely Technical but Socio-Technical Property
Machine Learning (CS)
Makes AI fair for everyone, not just groups.