Assessing High-Risk Systems: An EU AI Act Verification Framework
By: Alessio Buscemi, Tom Deckenbrunnen, Fahria Kabir, et al.
A central challenge in implementing the EU AI Act and other AI-relevant regulations is the lack of a systematic approach to verifying compliance with their legal mandates. Recent surveys show that this regulatory ambiguity is perceived as a significant burden and is leading to inconsistent readiness across Member States. This paper proposes a comprehensive framework designed to help close this gap by organising compliance verification along two fundamental dimensions: the type of method (controls vs. testing) and the target of assessment (data, model, processes, and final product). Additionally, our framework maps core legal requirements to concrete verification activities, serving as a bridge between policymakers and practitioners and aligning legal text with technical standards and best practices. The proposed approach aims to reduce interpretive uncertainty, promote consistency in assessment practices, and support the alignment of regulatory, ethical, and technical perspectives across the AI lifecycle.
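To make the two-dimensional organisation concrete, the sketch below shows one hypothetical way such a framework could be encoded in Python. The enum values mirror the dimensions named in the abstract (controls vs. testing; data, model, processes, final product); the specific requirement labels and activity descriptions are illustrative assumptions, not the paper's actual mapping.

```python
# Minimal sketch (hypothetical, not taken from the paper) of the framework's
# two dimensions and a requirement-to-activity mapping.
from dataclasses import dataclass
from enum import Enum


class Method(Enum):
    CONTROLS = "controls"   # documentation and process checks
    TESTING = "testing"     # empirical evaluation


class Target(Enum):
    DATA = "data"
    MODEL = "model"
    PROCESSES = "processes"
    PRODUCT = "final product"


@dataclass(frozen=True)
class VerificationActivity:
    requirement: str        # legal requirement, e.g. an AI Act article
    method: Method
    target: Target
    description: str


# Illustrative entries only; the actual mapping is defined in the paper.
activities = [
    VerificationActivity(
        requirement="Art. 10 - Data and data governance",
        method=Method.CONTROLS,
        target=Target.DATA,
        description="Review documentation of data provenance and bias handling.",
    ),
    VerificationActivity(
        requirement="Art. 15 - Accuracy, robustness and cybersecurity",
        method=Method.TESTING,
        target=Target.MODEL,
        description="Run robustness tests against perturbed inputs.",
    ),
]

# Group activities by (method, target) cell of the two-dimensional framework.
matrix: dict[tuple[Method, Target], list[VerificationActivity]] = {}
for activity in activities:
    matrix.setdefault((activity.method, activity.target), []).append(activity)

for (method, target), items in matrix.items():
    print(f"{method.value} x {target.value}: {[a.requirement for a in items]}")
```

Grouping activities into (method, target) cells is one plausible way to check coverage: an empty cell would flag a requirement area with no associated verification activity.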