Community Detection on Model Explanation Graphs for Explainable AI
By: Ehsan Moradi
Potential Business Impact:
Finds groups of clues that work together to shape a computer's decisions.
Feature-attribution methods (e.g., SHAP, LIME) explain individual predictions but often miss higher-order structure: sets of features that act in concert. We propose Modules of Influence (MoI), a framework that (i) constructs a model explanation graph from per-instance attributions, (ii) applies community detection to find feature modules that jointly affect predictions, and (iii) quantifies how these modules relate to bias, redundancy, and causality patterns. Across synthetic and real datasets, MoI uncovers correlated feature groups, improves model debugging via module-level ablations, and localizes bias exposure to specific modules. We release stability and synergy metrics, a reference implementation, and evaluation protocols to benchmark module discovery in XAI.
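As a rough illustration of steps (i) and (ii), the sketch below builds a feature-level explanation graph from a per-instance attribution matrix and extracts candidate modules with community detection. It is not the paper's reference implementation: the attribution values are random stand-ins for SHAP/LIME outputs, the edge weights (absolute correlation of attributions), the 0.3 threshold, and the choice of greedy modularity maximization are all assumptions made for the example.

```python
# Minimal sketch (assumed design, not the MoI reference implementation):
# build an explanation graph over features and find "modules of influence"
# via community detection.

import numpy as np
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

rng = np.random.default_rng(0)
n_instances, n_features = 500, 12

# Stand-in for a per-instance attribution matrix (rows: instances,
# columns: features), e.g. SHAP values from an explainer.
attributions = rng.normal(size=(n_instances, n_features))
# Inject two correlated feature groups so there are modules to discover.
attributions[:, 1] = attributions[:, 0] + 0.1 * rng.normal(size=n_instances)
attributions[:, 4] = attributions[:, 3] + 0.1 * rng.normal(size=n_instances)

# (i) Explanation graph: nodes are features; edges weighted by how strongly
# two features' attributions co-vary across instances.
corr = np.corrcoef(attributions, rowvar=False)
G = nx.Graph()
G.add_nodes_from(range(n_features))
threshold = 0.3  # assumed cutoff; the paper may define edge weights differently
for i in range(n_features):
    for j in range(i + 1, n_features):
        w = abs(corr[i, j])
        if w >= threshold:
            G.add_edge(i, j, weight=w)

# (ii) Community detection: each community is a candidate feature module
# that could then be probed with module-level ablations.
modules = greedy_modularity_communities(G, weight="weight")
for k, module in enumerate(modules):
    print(f"module {k}: features {sorted(module)}")
```

In practice the random matrix would be replaced by real attributions (e.g. the output of a SHAP explainer on a held-out set), and the detected modules would feed the module-level ablation and bias-localization analyses described above.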
Similar Papers
Chain-of-Influence: Tracing Interdependencies Across Time and Features in Clinical Predictive Modeling
Machine Learning (CS)
Shows how a patient's health signals influence each other over time.
MoMoE: Mixture of Moderation Experts Framework for AI-Assisted Online Governance
Computation and Language
Helps online sites remove bad posts better.
Consistency of Feature Attribution in Deep Learning Architectures for Multi-Omics
Machine Learning (Stat)
Finds which genes and molecular signals matter for diseases.