Explainability Requirements as Hyperproperties

Published: October 18, 2025 | arXiv ID: 2510.16402v1

By: Bernd Finkbeiner, Julian Siber

Potential Business Impact:

Provides a formal basis for verifying that autonomous systems can explain their decisions to the people who depend on them.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Explainability is emerging as a key requirement for autonomous systems. While many works have focused on what constitutes a valid explanation, few have considered formalizing explainability as a system property. In this work, we approach this problem from the perspective of hyperproperties. We start with a combination of three prominent flavors of modal logic and show how they can be used for specifying and verifying counterfactual explainability in multi-agent systems: With Lewis' counterfactuals, linear-time temporal logic, and a knowledge modality, we can reason about whether agents know why a specific observation occurs, i.e., whether that observation is explainable to them. We use this logic to formalize multiple notions of explainability on the system level. We then show how this logic can be embedded into a hyperlogic. Notably, from this analysis we conclude that the model-checking problem of our logic is decidable, which paves the way for the automated verification of explainability requirements.
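The combination of modalities described in the abstract can be illustrated with a small sketch. The notation below is an assumption for illustration only (the paper's actual definitions may differ): $K_a$ is the knowledge modality for agent $a$, and $\Box\!\!\rightarrow$ is Lewis' counterfactual conditional. A candidate reading of "observation $\psi$ is explainable to agent $a$" is that $a$ knows some cause $\varphi$ such that, had $\varphi$ not held, $\psi$ would not have occurred:

```latex
% Hypothetical sketch -- notation assumed, not taken from the paper.
% Expl_a(psi): agent a knows a counterfactual cause phi of psi,
% i.e., a knows that phi and psi hold, and that without phi,
% psi would not have occurred (Lewis' counterfactual).
\[
  \mathit{Expl}_a(\psi) \;:=\; \exists \varphi.\;
    K_a \bigl( \varphi \,\wedge\, \psi \,\wedge\,
      ( \neg\varphi \,\Box\!\!\rightarrow\, \neg\psi ) \bigr)
\]
```

Temporal structure (e.g., that $\psi$ eventually occurs along a trace) would be layered in via LTL operators; the paper's decidability result comes from embedding such formulas into a hyperlogic, where quantification over multiple execution traces makes counterfactual comparisons expressible.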

Page Count
27 pages

Category
Computer Science:
Logic in Computer Science