Logic Explanation of AI Classifiers by Categorical Explaining Functors
By: Stefano Fioravanti, Francesco Giannini, Paolo Frazzetto, and more
Potential Business Impact:
Makes AI's decisions understandable and trustworthy.
The most common methods in explainable artificial intelligence are post-hoc techniques that identify the most relevant features used by pretrained opaque models. Some of the most advanced post-hoc methods can generate explanations that account for the mutual interactions of input features in the form of logic rules. However, these methods frequently fail to guarantee that the extracted explanations are consistent with the model's underlying reasoning. To bridge this gap, we propose a theoretically grounded approach that ensures the coherence and fidelity of the extracted explanations, moving beyond the limitations of current heuristic-based approaches. To this end, drawing on category theory, we introduce an explaining functor that structurally preserves logical entailment between the explanation and the opaque model's reasoning. As a proof of concept, we validate the proposed theoretical construction on a synthetic benchmark, verifying that the proposed approach significantly mitigates the generation of contradictory or unfaithful explanations.
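
The paper's actual functorial construction is not reproduced here, but the abstract's core requirement can be sketched informally: explanations should be logic rules whose entailment relations mirror the opaque model's decisions. The minimal Python sketch below illustrates that requirement on a toy Boolean classifier; the names `Explanation`, `explain`, and `opaque_model` are hypothetical stand-ins, not the paper's API.

```python
# Illustrative sketch only. Explanations are conjunctions of Boolean literals,
# and we check two properties the abstract emphasizes: faithfulness (the rule
# never contradicts the model) and logical entailment between rules.
from itertools import product


class Explanation:
    """A conjunction of Boolean literals, e.g. {"x1": True, "x2": False}."""

    def __init__(self, literals):
        self.literals = dict(literals)

    def satisfied_by(self, assignment):
        return all(assignment[v] == val for v, val in self.literals.items())

    def entails(self, other, variables):
        # A |= B iff every assignment satisfying A also satisfies B.
        for values in product([False, True], repeat=len(variables)):
            assignment = dict(zip(variables, values))
            if self.satisfied_by(assignment) and not other.satisfied_by(assignment):
                return False
        return True


def opaque_model(x):
    # Toy "black box": predicts 1 iff x1 AND x2.
    return int(x["x1"] and x["x2"])


def explain(x):
    # Hypothetical explanation extractor: returns a conjunctive rule
    # describing why the model produced its prediction on input x.
    if opaque_model(x) == 1:
        return Explanation({"x1": True, "x2": True})
    return Explanation({"x1": x["x1"], "x2": x["x2"]})


if __name__ == "__main__":
    variables = ["x1", "x2"]
    x = {"x1": True, "x2": True}
    rule = explain(x)

    # Faithfulness: every input satisfying the rule gets the same prediction.
    faithful = all(
        opaque_model(dict(zip(variables, vals))) == opaque_model(x)
        for vals in product([False, True], repeat=len(variables))
        if rule.satisfied_by(dict(zip(variables, vals)))
    )
    print("explanation faithful to model:", faithful)  # True

    # Entailment: the extracted rule entails any weaker (less specific) rule.
    weaker = Explanation({"x1": True})
    print("rule entails weaker rule:", rule.entails(weaker, variables))  # True
```

In the paper's framing, the explaining functor is what guarantees such entailment-preservation structurally rather than by per-input checks like the brute-force test above, which is shown only to make the consistency requirement concrete.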
Similar Papers
Logic-Based Artificial Intelligence Algorithms Supporting Categorical Semantics
Artificial Intelligence
Helps computers think about complex things better.
Explanations as Bias Detectors: A Critical Study of Local Post-hoc XAI Methods for Fairness Exploration
Artificial Intelligence
Finds unfairness in AI, making it more just.
Rethinking Explainability in the Era of Multimodal AI
Artificial Intelligence
Explains how different data types work together.