Personalized Interpretability -- Interactive Alignment of Prototypical Parts Networks
By: Tomasz Michalski, Adam Wróbel, Andrea Bontempelli, and more
Potential Business Impact:
Lets you adjust how an AI groups visual features so its explanations match your understanding.
Concept-based interpretable neural networks have gained significant attention due to their intuitive, case-based explanations, such as "this bird looks like those sparrows". However, a major limitation is that these explanations may not always be comprehensible to users due to concept inconsistency, where multiple visual features are inappropriately mixed (e.g., a bird's head and wings treated as a single concept). This inconsistency breaks the alignment between model reasoning and human understanding. Furthermore, users have specific preferences for how concepts should look, yet current approaches provide no mechanism for incorporating their feedback. To address these issues, we introduce YoursProtoP, a novel interactive strategy that enables personalization of prototypical parts (the visual concepts used by the model) according to user needs. By incorporating user supervision, YoursProtoP adapts and splits the concepts used for both prediction and explanation to better match the user's preferences and understanding. Through experiments on the synthetic FunnyBirds dataset and a comprehensive user study on the real-world CUB, CARS, and PETS datasets, we demonstrate the effectiveness of YoursProtoP in achieving concept consistency without compromising model accuracy.
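The abstract does not spell out how a flagged concept gets split, so the following is only a loose illustration of the general idea: when a user marks a prototypical part as mixing two visual features, replace it with several more specific prototypes initialized from clusters of the user-flagged patch embeddings. The function name `split_prototype`, its arguments, and the plain k-means initialization are assumptions made for this sketch, not the paper's actual procedure; in practice the network would then be fine-tuned so the new prototypes specialize.

```python
import numpy as np

def split_prototype(prototypes, proto_idx, flagged_patches,
                    n_splits=2, n_iters=20, seed=0):
    """Hypothetical concept-splitting step (not the paper's method).

    prototypes      : (P, D) array of prototype vectors.
    proto_idx       : index of the prototype the user marked as inconsistent.
    flagged_patches : (N, D) patch embeddings the user associated with it
                      (e.g., some showing a head, others a wing).
    Returns a (P - 1 + n_splits, D) array with the split prototypes appended.
    """
    rng = np.random.default_rng(seed)
    # Initialize cluster centers from randomly chosen flagged patches.
    centers = flagged_patches[rng.choice(len(flagged_patches), n_splits, replace=False)]
    for _ in range(n_iters):
        # Assign each flagged patch to its nearest center (plain k-means).
        dists = np.linalg.norm(flagged_patches[:, None, :] - centers[None, :, :], axis=-1)
        labels = dists.argmin(axis=1)
        for k in range(n_splits):
            members = flagged_patches[labels == k]
            if len(members) > 0:
                centers[k] = members.mean(axis=0)
    # Drop the inconsistent prototype and append the new, more specific ones.
    kept = np.delete(prototypes, proto_idx, axis=0)
    return np.concatenate([kept, centers], axis=0)

# Toy usage: 10 prototypes of dimension 64; the user flags prototype 3 as
# mixing two visual features and provides 40 patch embeddings for it.
protos = np.random.randn(10, 64).astype(np.float32)
patches = np.random.randn(40, 64).astype(np.float32)
new_protos = split_prototype(protos, proto_idx=3, flagged_patches=patches)
print(new_protos.shape)  # (11, 64): one prototype replaced by two
```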
Similar Papers
Interpretable Image Classification via Non-parametric Part Prototype Learning
CV and Pattern Recognition
Helps computers explain *why* they see things.
Beyond Patches: Mining Interpretable Part-Prototypes for Explainable AI
CV and Pattern Recognition
Explains what an AI sees by pointing to meaningful object parts.
Rashomon Sets for Prototypical-Part Networks: Editing Interpretable Models in Real-Time
CV and Pattern Recognition
Fixes AI mistakes faster without retraining.