Personalized Interpretability -- Interactive Alignment of Prototypical Parts Networks

Published: June 5, 2025 | arXiv ID: 2506.05533v1

By: Tomasz Michalski, Adam Wróbel, Andrea Bontempelli, and more

Potential Business Impact:

Lets users reshape the visual concepts an image classifier relies on, so its explanations match how they see things.

Business Areas:
Semantic Search, Internet Services

Concept-based interpretable neural networks have gained significant attention due to their intuitive and easy-to-understand explanations based on case-based reasoning, such as "this bird looks like those sparrows". However, a major limitation is that these explanations may not always be comprehensible to users due to concept inconsistency, where multiple visual features are inappropriately mixed (e.g., a bird's head and wings treated as a single concept). This inconsistency breaks the alignment between model reasoning and human understanding. Furthermore, users have specific preferences for how concepts should look, yet current approaches provide no mechanism for incorporating their feedback. To address these issues, we introduce YoursProtoP, a novel interactive strategy that enables the personalization of prototypical parts (the visual concepts used by the model) according to user needs. By incorporating user supervision, YoursProtoP adapts and splits concepts used for both prediction and explanation to better match the user's preferences and understanding. Through experiments on the synthetic FunnyBirds dataset and a comprehensive user study on the real-world CUB, CARS, and PETS datasets, we demonstrate the effectiveness of YoursProtoP in achieving concept consistency without compromising the accuracy of the model.
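
The abstract's key operation, splitting a mixed prototypical part into separate concepts under user supervision, can be pictured with a short sketch. The `PrototypeLayer` and `split_prototype` below are hypothetical stand-ins, assuming a ProtoPNet-style distance-based prototype layer; the paper's actual procedure may differ.

```python
# Minimal sketch of splitting an inconsistent prototypical part.
# All names here are illustrative assumptions, not YoursProtoP's code.
import torch
import torch.nn as nn

class PrototypeLayer(nn.Module):
    def __init__(self, num_prototypes: int, dim: int, num_classes: int):
        super().__init__()
        # Each row is one prototypical part in the backbone's feature space.
        self.prototypes = nn.Parameter(torch.randn(num_prototypes, dim))
        # Last layer maps prototype similarities to class logits.
        self.classifier = nn.Linear(num_prototypes, num_classes, bias=False)

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        # features: (batch, locations, dim) patch embeddings from a backbone.
        protos = self.prototypes.expand(features.size(0), -1, -1)
        dists = torch.cdist(features, protos)   # (batch, locations, prototypes)
        sims = (-dists).amax(dim=1)             # max-pool similarity over locations
        return self.classifier(sims)

def split_prototype(layer: PrototypeLayer, idx: int, noise: float = 0.01) -> PrototypeLayer:
    """Duplicate prototype `idx` that a user flagged as mixing features
    (e.g., head and wing treated as a single concept)."""
    P, D = layer.prototypes.shape
    C = layer.classifier.out_features
    new = PrototypeLayer(P + 1, D, C)
    with torch.no_grad():
        new.prototypes[:P].copy_(layer.prototypes)
        # The clone starts as a perturbed copy of the flagged prototype.
        new.prototypes[P].copy_(layer.prototypes[idx] + noise * torch.randn(D))
        new.classifier.weight[:, :P].copy_(layer.classifier.weight)
        # The clone inherits the original prototype's class evidence.
        new.classifier.weight[:, P].copy_(layer.classifier.weight[:, idx])
    return new

# Usage: a user marks prototype 3 as inconsistent, so it is split in two.
layer = PrototypeLayer(num_prototypes=10, dim=128, num_classes=200)
layer = split_prototype(layer, idx=3)
```

After such a split, fine-tuning on the patches the user assigned to each sub-concept would let the two copies diverge into consistent, separately interpretable parts.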

Country of Origin
🇵🇱 Poland

Page Count
20 pages

Category
Computer Science:
Computer Vision and Pattern Recognition