Justified Evidence Collection for Argument-based AI Fairness Assurance
By: Alpay Sabuncuoglu, Christopher Burr, Carsten Maple
Potential Business Impact:
Helps organisations show their AI is fair by continuously checking it against evidence.
It is well recognised that ensuring fair AI systems is a complex sociotechnical challenge, which requires careful deliberation and continuous oversight across all stages of a system's lifecycle, from defining requirements to model deployment and deprovisioning. Dynamic argument-based assurance cases, which present structured arguments supported by evidence, have emerged as a systematic approach to evaluating and mitigating safety risks and hazards in AI-enabled system development, and have also been extended to address broader normative goals such as fairness and explainability. This paper introduces a systems-engineering-driven framework, supported by software tooling, to operationalise a dynamic approach to argument-based assurance in two stages. In the first stage, during the requirements planning phase, a multi-disciplinary and multi-stakeholder team defines the goals and claims to be established (and evidenced) by conducting a comprehensive fairness governance process. In the second stage, a continuous monitoring interface gathers evidence from existing artefacts, such as model, data, and use case documentation (e.g. metrics from automated tests), to support these arguments dynamically. The framework's effectiveness is demonstrated through an illustrative case study in finance, with a focus on supporting fairness-related arguments.
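To make the two-stage workflow concrete, the sketch below illustrates one way a goal-claim-evidence structure could be populated with evidence from automated fairness tests. It is a minimal, hypothetical example rather than the authors' tooling: the class names (AssuranceCase, Claim, Evidence), the demographic parity metric, and the 0.05 threshold are all illustrative assumptions.

```python
from dataclasses import dataclass, field
from typing import Dict, List

# Hypothetical sketch: a top-level fairness goal is decomposed into claims,
# and each claim is supported by evidence gathered from existing artefacts
# (e.g. metrics produced by automated tests). Names and values are assumed.

@dataclass
class Evidence:
    source: str        # artefact the evidence comes from (e.g. "automated test suite")
    metric: str        # measurement backing the claim
    value: float       # observed value from monitoring
    threshold: float   # acceptance threshold agreed during requirements planning

    def satisfied(self) -> bool:
        return self.value <= self.threshold

@dataclass
class Claim:
    statement: str
    evidence: List[Evidence] = field(default_factory=list)

    def supported(self) -> bool:
        # A claim counts as supported only if it has evidence and all of it passes.
        return bool(self.evidence) and all(e.satisfied() for e in self.evidence)

@dataclass
class AssuranceCase:
    goal: str
    claims: List[Claim] = field(default_factory=list)

    def attach_evidence(self, statement: str, evidence: Evidence) -> None:
        # Stage two: a monitoring job pushes fresh evidence to the matching claim.
        for claim in self.claims:
            if claim.statement == statement:
                claim.evidence.append(evidence)

    def status(self) -> Dict[str, bool]:
        return {c.statement: c.supported() for c in self.claims}

# Stage one: the multi-stakeholder team defines the goal and its claims.
case = AssuranceCase(
    goal="The credit-scoring model treats applicants fairly",
    claims=[Claim("Approval rates do not differ unduly across protected groups")],
)

# Stage two: automated tests report metrics that become evidence for the claim.
case.attach_evidence(
    "Approval rates do not differ unduly across protected groups",
    Evidence(source="automated test suite",
             metric="demographic parity difference",
             value=0.03, threshold=0.05),
)

print(case.status())
# {'Approval rates do not differ unduly across protected groups': True}
```

In this reading, the first stage fixes the argument structure and acceptance thresholds, while the second stage only appends or refreshes evidence, so the assurance case can be re-evaluated whenever new test results arrive.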
Similar Papers
A Framework for the Assurance of AI-Enabled Systems
Artificial Intelligence
Makes military AI safe and trustworthy for use.
Argumentation-Based Explainability for Legal AI: Comparative and Regulatory Perspectives
Artificial Intelligence
Explains why AI makes legal decisions.
Toward a Harmonized Approach -- Requirement-based Structuring of a Safety Assurance Argumentation for Automated Vehicles
Systems and Control
Makes self-driving cars safer for everyone.