Self-Certification of High-Risk AI Systems: The Example of AI-based Facial Emotion Recognition
By: Gregor Autischer, Kerstin Waxnegger, Dominik Kowald
The European Union's Artificial Intelligence Act establishes comprehensive requirements for high-risk AI systems, yet the harmonized standards necessary for demonstrating compliance are not yet fully developed. In this paper, we investigate the practical application of the Fraunhofer AI assessment catalogue as a certification framework through a complete self-certification cycle of an AI-based facial emotion recognition system. Beginning with a baseline model exhibiting deficiencies, including inadequate demographic representation and high prediction uncertainty, we document an enhancement process guided by AI certification requirements. The enhanced system achieves higher accuracy, improved reliability metrics, and consistent fairness across demographic groups. We focused our assessment on two of the six Fraunhofer catalogue dimensions, reliability and fairness; the enhanced system satisfies the certification criteria for both examined dimensions. We find that the certification framework provides value as a proactive development tool, driving concrete technical improvements and generating documentation naturally as part of the development process. However, fundamental gaps separate structured self-certification from legal compliance: harmonized European standards are not yet fully available, and AI assessment frameworks and catalogues cannot substitute for them on their own. These findings establish the Fraunhofer AI assessment catalogue as a valuable preparatory tool that, at present, complements rather than replaces formal compliance requirements.
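To illustrate the kind of fairness assessment the abstract refers to, the following minimal Python sketch computes per-group accuracy and the maximum accuracy gap across demographic groups for an emotion classifier. All names, data, and the gap metric here are illustrative assumptions for exposition; they are not taken from the paper's actual assessment protocol or the Fraunhofer catalogue's prescribed metrics.

```python
# Hypothetical sketch: group-wise accuracy as a simple fairness check
# for a facial emotion recognition classifier. Data and function names
# are illustrative, not the paper's actual evaluation.
from collections import defaultdict

def per_group_accuracy(y_true, y_pred, groups):
    """Return classification accuracy separately for each demographic group."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for t, p, g in zip(y_true, y_pred, groups):
        total[g] += 1
        correct[g] += int(t == p)
    return {g: correct[g] / total[g] for g in total}

def max_accuracy_gap(acc_by_group):
    """Largest difference in accuracy between any two groups."""
    vals = list(acc_by_group.values())
    return max(vals) - min(vals)

# Toy example with two demographic groups "A" and "B".
y_true = ["happy", "sad", "happy", "sad", "happy", "sad"]
y_pred = ["happy", "sad", "happy", "happy", "happy", "sad"]
groups = ["A", "A", "A", "B", "B", "B"]

acc = per_group_accuracy(y_true, y_pred, groups)   # e.g. {"A": 1.0, "B": 0.667}
gap = max_accuracy_gap(acc)
```

In a real assessment, such a gap would be compared against a predefined threshold, and the catalogue would additionally require documenting how groups were defined and how representative the evaluation data is.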