If Concept Bottlenecks are the Question, are Foundation Models the Answer?
By: Nicola Debole, Pietro Barbiero, Francesco Giannini, and more
Potential Business Impact:
Lets computers learn visual concepts from pictures without expert labels.
Concept Bottleneck Models (CBMs) are neural networks designed to combine high performance with ante-hoc interpretability. CBMs work by first mapping inputs (e.g., images) to high-level concepts (e.g., visible objects and their properties) and then using these concepts to solve a downstream task (e.g., tagging or scoring an image) in an interpretable manner. Their performance and interpretability, however, hinge on the quality of the concepts they learn. The go-to strategy for ensuring good-quality concepts is to leverage expert annotations, which are expensive to collect and seldom available in applications. Researchers have recently addressed this issue by introducing "VLM-CBM" architectures that replace manual annotations with weak supervision from foundation models. It is, however, unclear what impact doing so has on the quality of the learned concepts. To answer this question, we put state-of-the-art VLM-CBMs to the test, analyzing their learned concepts empirically using a selection of significant metrics. Our results show that, depending on the task, VLM supervision can differ substantially from expert annotations, and that concept accuracy and concept quality are not strongly correlated. Our code is available at https://github.com/debryu/CQA.
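For a concrete picture of the pipeline the abstract describes, here is a minimal sketch of a CBM trained with VLM-based weak concept supervision. This is an illustrative example only, not the authors' CQA implementation: the `ConceptBottleneckModel` module, the CLIP model choice, the prompt template, and the 0.25 threshold are all assumptions.

```python
import torch
import torch.nn as nn
import open_clip

class ConceptBottleneckModel(nn.Module):
    """Backbone -> concept layer -> linear task head over concepts."""
    def __init__(self, backbone, backbone_dim, n_concepts, n_classes):
        super().__init__()
        self.backbone = backbone                            # e.g., a pretrained CNN
        self.concept_head = nn.Linear(backbone_dim, n_concepts)
        self.task_head = nn.Linear(n_concepts, n_classes)   # interpretable: linear over concepts

    def forward(self, x):
        features = self.backbone(x)
        concepts = torch.sigmoid(self.concept_head(features))  # concept activations in [0, 1]
        task_logits = self.task_head(concepts)                 # prediction uses only the concepts
        return concepts, task_logits

def cbm_loss(concepts, task_logits, concept_labels, task_labels, lam=1.0):
    """Joint objective: downstream task loss plus concept supervision."""
    task_loss = nn.functional.cross_entropy(task_logits, task_labels)
    concept_loss = nn.functional.binary_cross_entropy(concepts, concept_labels)
    return task_loss + lam * concept_loss

# "VLM-CBM" weak supervision: instead of expert annotations, score each
# concept by CLIP image-text similarity and threshold into pseudo-labels.
# Model choice, prompt template, and threshold are assumptions.
clip_model, _, preprocess = open_clip.create_model_and_transforms(
    "ViT-B-32", pretrained="laion2b_s34b_b79k")
tokenizer = open_clip.get_tokenizer("ViT-B-32")

@torch.no_grad()
def vlm_concept_labels(images, concept_names, threshold=0.25):
    """images: batch of CLIP-preprocessed tensors; returns {0, 1} pseudo-labels."""
    image_feats = clip_model.encode_image(images)
    text_feats = clip_model.encode_text(
        tokenizer([f"a photo of {c}" for c in concept_names]))
    image_feats = image_feats / image_feats.norm(dim=-1, keepdim=True)
    text_feats = text_feats / text_feats.norm(dim=-1, keepdim=True)
    sims = image_feats @ text_feats.T            # cosine similarity, one column per concept
    return (sims > threshold).float()            # weak labels replacing expert annotations
```

In this setup, training simply feeds the output of `vlm_concept_labels` into `cbm_loss` in place of expert `concept_labels`, which is exactly where the paper's question enters: the task head is only as interpretable as the concepts the VLM supervises.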
Similar Papers
Flexible Concept Bottleneck Model
CV and Pattern Recognition
Lets AI learn new things without full retraining.
Locality-aware Concept Bottleneck Model
CV and Pattern Recognition
Teaches computers to find and use visual clues.
Graph Concept Bottleneck Models
Machine Learning (CS)
Shows how ideas connect to understand pictures.