Community Detection on Model Explanation Graphs for Explainable AI

Published: October 31, 2025 | arXiv ID: 2510.27655v1

By: Ehsan Moradi

Potential Business Impact:

Identifies groups of related features that jointly drive a model's decisions, which can speed up model debugging and bias auditing.

Business Areas:
Predictive Analytics, Artificial Intelligence, Data and Analytics, Software

Feature-attribution methods (e.g., SHAP, LIME) explain individual predictions but often miss higher-order structure: sets of features that act in concert. We propose Modules of Influence (MoI), a framework that (i) constructs a model explanation graph from per-instance attributions, (ii) applies community detection to find feature modules that jointly affect predictions, and (iii) quantifies how these modules relate to bias, redundancy, and causality patterns. Across synthetic and real datasets, MoI uncovers correlated feature groups, improves model debugging via module-level ablations, and localizes bias exposure to specific modules. We release stability and synergy metrics, a reference implementation, and evaluation protocols to benchmark module discovery in XAI.
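The pipeline in the abstract (attributions → explanation graph → community detection) can be sketched as follows. This is a minimal illustration with assumed details: the paper's exact graph construction is not specified here, so we use the absolute Pearson correlation of per-instance attributions as edge weights, a threshold of 0.5 for sparsification, and NetworkX's greedy modularity communities as the detection step.

```python
import numpy as np
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

rng = np.random.default_rng(0)

# Synthetic per-instance attributions (e.g., SHAP values):
# 200 instances x 6 features, where features 0-2 and 3-5 are
# constructed to co-vary, forming two latent modules.
base = rng.normal(size=(200, 2))
attr = np.column_stack([
    base[:, [0]] + 0.1 * rng.normal(size=(200, 3)),
    base[:, [1]] + 0.1 * rng.normal(size=(200, 3)),
])

# (i) Build the explanation graph: nodes are features, edges weighted
# by the absolute correlation of their attribution profiles.
corr = np.abs(np.corrcoef(attr, rowvar=False))
n_features = attr.shape[1]
G = nx.Graph()
G.add_nodes_from(range(n_features))
for i in range(n_features):
    for j in range(i + 1, n_features):
        if corr[i, j] > 0.5:  # assumed sparsification threshold
            G.add_edge(i, j, weight=corr[i, j])

# (ii) Community detection recovers the feature modules.
modules = sorted(sorted(c) for c in
                 greedy_modularity_communities(G, weight="weight"))
print(modules)  # the two planted modules: [[0, 1, 2], [3, 4, 5]]
```

Step (iii) of the framework — relating modules to bias, redundancy, and causality — would then operate on these recovered groups, e.g., via module-level ablations as described above.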

Page Count
14 pages

Category
Computer Science:
Social and Information Networks