Score: 1

Explaining, Fast and Slow: Abstraction and Refinement of Provable Explanations

Published: June 10, 2025 | arXiv ID: 2506.08505v1

By: Shahaf Bassan, Yizhak Yisrael Elboher, Tobias Ladner, and more

Potential Business Impact:

Enables neural network predictions to be explained with formal, provable guarantees, making them more understandable and trustworthy.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Despite significant advancements in post-hoc explainability techniques for neural networks, many current methods rely on heuristics and do not provide formally provable guarantees for the explanations they produce. Recent work has shown that explanations with formal guarantees can be obtained by using neural network verification techniques to identify subsets of input features that are sufficient to determine that the prediction remains unchanged. Despite the appeal of these explanations, their computation faces significant scalability challenges. In this work, we address this gap by proposing a novel abstraction-refinement technique for efficiently computing provably sufficient explanations of neural network predictions. Our method abstracts the original large neural network by constructing a substantially reduced network in which a sufficient explanation of the reduced network is also provably sufficient for the original network, hence significantly speeding up the verification process. If the explanation is insufficient on the reduced network, we iteratively refine the abstraction by gradually increasing the network size until convergence. Our experiments demonstrate that our approach enhances the efficiency of obtaining provably sufficient explanations for neural network predictions while additionally providing a fine-grained interpretation of the network's predictions across different abstraction levels.
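The abstract describes an abstraction-refinement loop: verify a candidate explanation on a reduced network first, and only grow the abstraction when that check fails. The following is a minimal sketch of that loop in Python, not the authors' implementation; the helper names (reduce_network, verify_sufficiency, refine) are hypothetical placeholders, and a real verify_sufficiency would invoke a neural network verifier on the chosen feature subset.

```python
# Hypothetical sketch of the abstraction-refinement loop from the abstract.
# reduce_network, verify_sufficiency, and refine are assumed callbacks, not a real API.

def provably_sufficient_explanation(network, x, candidate_features,
                                    reduce_network, verify_sufficiency, refine):
    """Check a candidate explanation on a small abstraction first, and only
    grow the network when the abstraction is too coarse to decide."""
    abstract_net = reduce_network(network)   # start from a heavily reduced network
    while True:
        if verify_sufficiency(abstract_net, x, candidate_features):
            # Per the paper's guarantee, sufficiency on the reduced network
            # transfers to the original network.
            return candidate_features
        if abstract_net is network:
            # Even the full network cannot certify the candidate subset.
            return None
        # The abstraction was too coarse; refine it (increase its size) and retry.
        abstract_net = refine(abstract_net, network)
```

The key design point, as stated in the abstract, is that a sufficiency proof on the reduced network carries over to the original one, so most of the verification work can happen on the much smaller abstraction and the full network is consulted only as a last resort.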

Country of Origin
🇮🇱 🇩🇪 Israel, Germany

Page Count
24 pages

Category
Computer Science:
Machine Learning (CS)