On the Complexities of Testing for Compliance with Human Oversight Requirements in AI Regulation
By: Markus Langer, Veronika Lazar, Kevin Baum
Potential Business Impact:
Examines how to test whether AI systems have adequate human oversight.
Human oversight requirements are a core component of the European AI Act and of AI governance more broadly. In this paper, we highlight key challenges in testing for compliance with these requirements. A central difficulty lies in balancing simple but potentially ineffective checklist-based approaches against resource-intensive, context-sensitive empirical testing of how effectively humans actually oversee AI. Questions about when compliance testing must be updated, the context-dependent nature of human oversight requirements, and difficult-to-operationalize standards further complicate compliance testing. We argue that these challenges illustrate broader challenges for the future of sociotechnical AI governance, i.e., a future that shifts from ensuring good technological products to ensuring good sociotechnical systems.
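The contrast the abstract draws between checklist-based and empirical compliance testing can be made concrete with a small sketch. The following Python example is not from the paper; all names (OversightFeatures, checklist_compliant, empirically_effective) and the 0.8 threshold are hypothetical placeholders. It illustrates why a static checklist can pass a system whose human oversight is ineffective in practice.

```python
from dataclasses import dataclass


@dataclass
class OversightFeatures:
    """Hypothetical, self-reported design features of an AI system."""
    has_stop_button: bool
    shows_confidence_scores: bool
    overseer_trained: bool


def checklist_compliant(f: OversightFeatures) -> bool:
    """Checklist-based test: cheap to run, but only inspects declared features."""
    return f.has_stop_button and f.shows_confidence_scores and f.overseer_trained


def empirically_effective(overrides_of_bad_outputs: int,
                          bad_outputs_shown: int,
                          threshold: float = 0.8) -> bool:
    """Empirical test: did human overseers actually catch erroneous outputs
    in a (costly) observational study? The threshold is an arbitrary stand-in
    for a context-dependent, hard-to-operationalize standard."""
    if bad_outputs_shown == 0:
        raise ValueError("empirical testing requires observed oversight cases")
    return overrides_of_bad_outputs / bad_outputs_shown >= threshold


# A system can tick every checklist box yet fail empirically, e.g. because
# of automation bias: overseers rarely override the AI in practice.
features = OversightFeatures(True, True, True)
print(checklist_compliant(features))                 # True
print(empirically_effective(overrides_of_bad_outputs=12,
                            bad_outputs_shown=100))  # False
```

The sketch also hints at the paper's further points: the empirical test is expensive to run, its threshold is context-dependent, and its result can become stale as the system or its users change, raising the question of when it must be repeated.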
Similar Papers
Beyond Procedural Compliance: Human Oversight as a Dimension of Well-being Efficacy in AI Governance
Computers and Society
Teaches people to control AI safely.
Compliance of AI Systems
Computers and Society
Makes AI systems follow laws for fairness.
Assessing High-Risk Systems: An EU AI Act Verification Framework
Computers and Society
Helps check if AI follows the law.