An Information-Flow Perspective on Explainability Requirements: Specification and Verification

Published: September 1, 2025 | arXiv ID: 2509.01479v1

By: Bernd Finkbeiner, Hadar Frenkel, Julian Siber

Potential Business Impact:

Lets software systems explain the reasons behind their decisions to the people who use them, while provably respecting privacy constraints.

Business Areas:
Artificial Intelligence, Data and Analytics, Science and Engineering, Software

Explainable systems expose, to the agents interacting with them, information about why certain observed effects are happening. We argue that this constitutes a positive flow of information that needs to be specified, verified, and balanced against negative information flow that may, e.g., violate privacy guarantees. Since both explainability and privacy require reasoning about knowledge, we tackle these tasks with epistemic temporal logic extended with quantification over counterfactual causes. This allows us to specify that a multi-agent system exposes enough information for agents to acquire knowledge of why some effect occurred. We show how this principle can be used to specify explainability as a system-level requirement, and we provide an algorithm for checking finite-state models against such specifications. We present a prototype implementation of the algorithm and evaluate it on several benchmarks, illustrating how our approach distinguishes between explainable and unexplainable systems, and how it allows us to pose additional privacy requirements.
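As a rough intuition for the kind of requirement involved, the demand that an agent comes to know why an effect occurred can be rendered, in informal notation, as G(effect -> ∃c. K_agent Cause(c, effect)): globally, whenever the effect holds, some counterfactual cause exists that the agent knows to hold. The sketch below evaluates a much-simplified instance of this idea on a toy finite-state model, where an agent's knowledge is computed from an indistinguishability relation induced by its observations. The state space, observation functions, and helper names (indistinguishable, knows, knows_why) are illustrative assumptions for this sketch, not the paper's formalism or tool.

```python
# A minimal sketch of epistemic evaluation on a finite-state model.
# States are labeled with atomic propositions; the agent's observation
# function induces an indistinguishability relation over states.

# Toy system: the label records whether an effect occurred and which
# of two candidate causes produced it.
states = {
    "s0": {"effect": False, "cause_a": False, "cause_b": False},
    "s1": {"effect": True,  "cause_a": True,  "cause_b": False},
    "s2": {"effect": True,  "cause_a": False, "cause_b": True},
}

# Observation functions: in the explainable variant the observation
# reveals which cause fired; in the unexplainable one it only reveals
# that the effect occurred.
obs_explainable   = {"s0": "quiet", "s1": "alarm-A", "s2": "alarm-B"}
obs_unexplainable = {"s0": "quiet", "s1": "alarm",   "s2": "alarm"}

def indistinguishable(obs, s):
    """States the agent cannot tell apart from s, given its observations."""
    return [t for t in states if obs[t] == obs[s]]

def knows(obs, s, prop):
    """K_agent prop: prop holds in every state indistinguishable from s."""
    return all(states[t][prop] for t in indistinguishable(obs, s))

def knows_why(obs):
    """Whenever the effect occurs, the agent knows which cause holds."""
    return all(
        knows(obs, s, "cause_a") or knows(obs, s, "cause_b")
        for s in states if states[s]["effect"]
    )

print(knows_why(obs_explainable))    # True: observations reveal the cause
print(knows_why(obs_unexplainable))  # False: both causes look alike
```

The two observation functions mirror the distinction the abstract describes: the first model exposes enough information for the agent to know the cause of the alarm and passes the knows-why check, while the second collapses both causes into a single observation and fails it. A privacy requirement would point the same machinery in the opposite direction, demanding that some proposition remain unknown to the agent.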

Country of Origin
🇮🇱 Israel

Page Count
13 pages

Category
Computer Science:
Logic in Computer Science