Interpretable Image Classification via Non-parametric Part Prototype Learning
By: Zhijie Zhu, Lei Fan, Maurice Pagnucco, and more
Potential Business Impact:
Helps computers explain *why* they see things.
Classifying images with an interpretable decision-making process is a long-standing problem in computer vision. In recent years, Prototypical Part Networks have gained traction as an approach to self-explainable neural networks, owing to their ability to mimic human visual reasoning by providing explanations based on prototypical object parts. However, the quality of the explanations generated by these methods leaves room for improvement, as the prototypes usually focus on repetitive and redundant concepts. Leveraging recent advances in prototype learning, we present a framework for part-based interpretable image classification that learns a set of semantically distinctive object parts for each class and provides diverse and comprehensive explanations. The core of our method is to learn the part prototypes in a non-parametric fashion, by clustering deep features extracted from foundation vision models that encode robust semantic information. To quantitatively evaluate the quality of explanations provided by ProtoPNets, we introduce the Distinctiveness Score and the Comprehensiveness Score. Through evaluation on the CUB-200-2011, Stanford Cars, and Stanford Dogs datasets, we show that our framework compares favourably against existing ProtoPNets while achieving better interpretability. Code is available at: https://github.com/zijizhu/proto-non-param.
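To make the core idea concrete, here is a minimal sketch (not the authors' implementation) of non-parametric part-prototype learning: patch features from a foundation vision model are clustered per class, and the cluster centroids serve as part prototypes. The backbone choice, the `num_parts` value, and the random stand-in features are illustrative assumptions; see the linked repository for the actual method.

```python
# Minimal sketch: per-class part prototypes via k-means over patch features.
# Assumption: a ViT-style foundation backbone (e.g., DINOv2) yields patch
# embeddings of shape (num_patches, dim) per image, pooled over a class here.
import numpy as np
from sklearn.cluster import KMeans

def part_prototypes_for_class(patch_features: np.ndarray,
                              num_parts: int = 4) -> np.ndarray:
    """Cluster patch embeddings from all training images of one class.

    patch_features: (N, D) array of patch embeddings for the class.
    Returns: (num_parts, D) centroids used as non-parametric part prototypes.
    """
    kmeans = KMeans(n_clusters=num_parts, n_init=10, random_state=0)
    kmeans.fit(patch_features)
    # Each centroid acts as a part prototype; at inference, similarity between
    # an image's patches and these centroids provides the interpretable
    # part-level evidence for the class decision.
    return kmeans.cluster_centers_

if __name__ == "__main__":
    # Random features stand in for real backbone outputs (hypothetical data).
    fake_features = np.random.randn(1024, 768).astype(np.float32)
    prototypes = part_prototypes_for_class(fake_features, num_parts=4)
    print(prototypes.shape)  # (4, 768)
```

Because the prototypes are cluster centroids rather than learned parameters, they stay anchored to actual feature statistics of the class, which is what discourages the repetitive, redundant concepts the abstract criticizes in parametric ProtoPNets.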
Similar Papers
Beyond Patches: Mining Interpretable Part-Prototypes for Explainable AI
CV and Pattern Recognition
Shows how computers understand pictures.
Personalized Interpretability -- Interactive Alignment of Prototypical Parts Networks
CV and Pattern Recognition
Makes AI understand things like you do.
Rashomon Sets for Prototypical-Part Networks: Editing Interpretable Models in Real-Time
CV and Pattern Recognition
Fixes AI mistakes faster without retraining.