Logic Explanation of AI Classifiers by Categorical Explaining Functors

Published: March 20, 2025 | arXiv ID: 2503.16203v1

By: Stefano Fioravanti, Francesco Giannini, Paolo Frazzetto, and more

Potential Business Impact:

Provides formal guarantees that explanations of an AI model's decisions faithfully reflect its reasoning, making them understandable and trustworthy.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

The most common methods in explainable artificial intelligence are post-hoc techniques that identify the features most relevant to a pretrained opaque model's predictions. Some of the most advanced post-hoc methods can generate explanations that account for the mutual interactions of input features in the form of logic rules. However, these methods frequently fail to guarantee that the extracted explanations are consistent with the model's underlying reasoning. To bridge this gap, we propose a theoretically grounded approach to ensure the coherence and fidelity of the extracted explanations, moving beyond the limitations of current heuristic-based approaches. To this end, drawing from category theory, we introduce an explaining functor that structurally preserves logical entailment between the explanation and the opaque model's reasoning. As a proof of concept, we validate the proposed theoretical constructions on a synthetic benchmark, verifying that the approach significantly mitigates the generation of contradictory or unfaithful explanations.
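
The core requirement described here, that the explaining functor preserve logical entailment, amounts to a monotone map between two entailment preorders: whenever one model state entails another, the formulas they are mapped to must stand in the same entailment relation. The sketch below illustrates that preservation condition on toy data; the concept names, the conjunctive translation, and the subset-based model entailment are illustrative assumptions, not the paper's actual construction.

```python
# A minimal sketch of the entailment-preservation property an
# "explaining functor" must satisfy, shown on toy preorder categories.
# All names and structures here are illustrative assumptions.

from itertools import product

CONCEPTS = ["round", "red", "small"]

def conj(concepts):
    """Functor on objects: map a set of active concepts to a
    conjunctive formula, represented as a frozenset of atoms."""
    return frozenset(concepts)

def formula_entails(phi, psi):
    """Logic-side entailment phi |= psi, checked semantically:
    every assignment satisfying all atoms of phi must also
    satisfy all atoms of psi."""
    for bits in product([False, True], repeat=len(CONCEPTS)):
        val = dict(zip(CONCEPTS, bits))
        if all(val[a] for a in phi) and not all(val[a] for a in psi):
            return False
    return True

def model_entails(x, y):
    """Toy 'model-side' entailment: a state x (concepts the model
    deems active) entails y when y's concepts are among x's."""
    return y <= x

# Functoriality check: every model-side entailment arrow x -> y must
# be sent to a logical entailment F(x) |= F(y).
states = [frozenset(s) for s in
          [{"round"}, {"round", "red"}, {"round", "red", "small"}]]
for x, y in product(states, repeat=2):
    if model_entails(x, y):
        assert formula_entails(conj(x), conj(y)), (x, y)
print("entailment preserved on all toy arrows")
```

Functoriality additionally requires that compositions of entailment steps be preserved; in this toy setting the transitivity of the subset relation and of logical entailment gives that for free.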

Page Count
13 pages

Category
Computer Science:
Artificial Intelligence