Beyond Patches: Mining Interpretable Part-Prototypes for Explainable AI
By: Mahdi Alehdaghi, Rajarshi Bhattacharya, Pourya Shamsolmoali, and more
Potential Business Impact:
Shows why a computer decides a picture is what it is, by pointing to recognizable object parts.
Deep learning has provided considerable advancements for multimedia systems, yet the interpretability of deep models remains a challenge. State-of-the-art post-hoc explainability methods, such as GradCAM, provide visual interpretation based on heatmaps but lack conceptual clarity. Prototype-based approaches, like ProtoPNet and PIPNet, offer a more structured explanation but rely on fixed patches, limiting their robustness and semantic consistency. To address these limitations, a part-prototypical concept mining network (PCMNet) is proposed that dynamically learns interpretable prototypes from meaningful regions. PCMNet clusters prototypes into concept groups, creating semantically grounded explanations without requiring additional annotations. Through a joint process of unsupervised part discovery and concept activation vector extraction, PCMNet effectively captures discriminative concepts and makes interpretable classification decisions. Our extensive experiments comparing PCMNet against state-of-the-art methods on multiple datasets show that it can provide a high level of interpretability, stability, and robustness under clean and occluded scenarios.
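For intuition, here is a minimal, hypothetical PyTorch-style sketch of the kind of pipeline the abstract describes: a backbone feature map, soft part regions discovered without part labels, per-part prototype similarities that act as concept activations, and a linear classifier over those activations. All module names, sizes, and the pooling details are illustrative assumptions, not the authors' PCMNet implementation.

```python
# Illustrative sketch only -- not the authors' code. It mimics the high-level
# recipe in the abstract: discover part regions without extra annotations,
# score learnable prototypes inside each region, and classify from the
# resulting concept activations. All names and sizes are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class PartPrototypeSketch(nn.Module):
    def __init__(self, feat_dim=128, num_parts=4, protos_per_part=8, num_classes=10):
        super().__init__()
        # Tiny stand-in backbone producing a spatial feature map (B, C, H, W).
        self.backbone = nn.Sequential(
            nn.Conv2d(3, feat_dim, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(feat_dim, feat_dim, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Unsupervised part discovery: a 1x1 conv predicting soft part maps.
        self.part_heads = nn.Conv2d(feat_dim, num_parts, kernel_size=1)
        # One bank of prototype vectors per part (a "concept group").
        self.prototypes = nn.Parameter(
            torch.randn(num_parts, protos_per_part, feat_dim))
        # Linear classifier over all prototype activations.
        self.classifier = nn.Linear(num_parts * protos_per_part, num_classes)

    def forward(self, x):
        feats = self.backbone(x)                       # (B, C, H, W)
        part_maps = self.part_heads(feats).flatten(2)  # (B, P, H*W)
        part_maps = part_maps.softmax(dim=-1)          # soft region per part
        flat = feats.flatten(2)                        # (B, C, H*W)
        # Pool features inside each discovered part region.
        part_feats = torch.einsum('bph,bch->bpc', part_maps, flat)
        # Prototype activation = cosine similarity to that part's prototypes.
        sims = torch.einsum('bpc,pkc->bpk',
                            F.normalize(part_feats, dim=-1),
                            F.normalize(self.prototypes, dim=-1))
        concept_activations = sims.flatten(1)          # interpretable vector
        return self.classifier(concept_activations), part_maps, sims


if __name__ == "__main__":
    model = PartPrototypeSketch()
    logits, part_maps, proto_sims = model(torch.randn(2, 3, 64, 64))
    print(logits.shape, part_maps.shape, proto_sims.shape)
```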
Similar Papers
Interpretable Image Classification via Non-parametric Part Prototype Learning
CV and Pattern Recognition
Helps computers explain *why* they see things.
Personalized Interpretability -- Interactive Alignment of Prototypical Parts Networks
CV and Pattern Recognition
Lets people adjust the AI's explanations to match how they see things.
CIP-Net: Continual Interpretable Prototype-based Network
Machine Learning (CS)
Keeps AI smart without forgetting old lessons.