Locality-aware Concept Bottleneck Model
By: Sujin Jeon, Hyundo Lee, Eungseo Kim, and more
Potential Business Impact:
Teaches computers to find visual clues and use them from the right parts of an image.
Concept bottleneck models (CBMs) are inherently interpretable models that make predictions based on human-understandable visual cues, referred to as concepts. Because obtaining dense concept annotations through human labeling is demanding and costly, recent approaches use foundation models to determine which concepts are present in an image. However, such label-free CBMs often fail to localize concepts in the relevant regions, attending to visually unrelated regions when predicting concept presence. To address this, we propose a framework, coined Locality-aware Concept Bottleneck Model (LCBM), which utilizes rich information from foundation models and adopts prototype learning to ensure accurate spatial localization of the concepts. Specifically, we assign one prototype to each concept and train it to represent a prototypical image feature of that concept. These prototypes are learned by encouraging them to encode similar local regions, with foundation models used to ensure that each prototype remains relevant to its associated concept. We then use the prototypes to guide the model toward the proper local region from which each concept should be predicted. Experimental results demonstrate that LCBM effectively identifies the concepts present in images and exhibits improved localization while maintaining comparable classification performance.
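To make the prototype mechanism concrete, here is a minimal sketch of a prototype-based concept bottleneck in PyTorch. It is an illustration under stated assumptions, not the paper's implementation: the class name, encoder interface, and dimensions are hypothetical, and the foundation-model supervision on the concept scores is omitted.

```python
# Illustrative sketch of a prototype-based concept bottleneck.
# Assumptions (not from the paper): a backbone `encoder` that returns a
# spatial feature map of shape (B, D, H, W); cosine similarity as the
# prototype-to-region matching score; the training losses are omitted.
import torch
import torch.nn as nn
import torch.nn.functional as F

class LocalityAwareCBM(nn.Module):
    def __init__(self, encoder, feat_dim, num_concepts, num_classes):
        super().__init__()
        self.encoder = encoder
        # One learnable prototype per concept, each meant to encode a
        # prototypical local image feature of that concept.
        self.prototypes = nn.Parameter(torch.randn(num_concepts, feat_dim))
        # Interpretable head: the class is predicted from concept scores only.
        self.classifier = nn.Linear(num_concepts, num_classes)

    def forward(self, images):
        feats = self.encoder(images)                      # (B, D, H, W)
        B, D, H, W = feats.shape
        feats = F.normalize(feats.flatten(2), dim=1)      # (B, D, H*W)
        protos = F.normalize(self.prototypes, dim=1)      # (K, D)
        # Cosine similarity of every prototype to every spatial location.
        sim = torch.einsum('kd,bdn->bkn', protos, feats)  # (B, K, H*W)
        # Concept presence is read off the best-matching local region; the
        # argmax over locations is what localizes each concept.
        concept_scores, locations = sim.max(dim=-1)       # each (B, K)
        logits = self.classifier(concept_scores)
        return logits, concept_scores, locations
```

In this sketch, localization falls out of the per-location similarity map: the index of the best-matching region indicates where each concept was detected, while the bottleneck structure (classifying from concept scores alone) preserves interpretability.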
Similar Papers
Towards more holistic interpretability: A lightweight disentangled Concept Bottleneck Model
CV and Pattern Recognition
Helps computers explain *why* they make decisions.
Partially Shared Concept Bottleneck Models
CV and Pattern Recognition
Makes AI explain its decisions clearly and accurately.
Flexible Concept Bottleneck Model
CV and Pattern Recognition
Lets AI learn new things without full retraining.