Graph Concept Bottleneck Models
By: Haotian Xu, Tsui-Wei Weng, Lam M. Nguyen, and more
Potential Business Impact:
Shows how connecting related ideas helps computers understand pictures.
Concept Bottleneck Models (CBMs) provide explicit interpretations for deep neural networks through concepts and allow interventions on concepts to adjust final predictions. Existing CBMs assume concepts are conditionally independent given labels and isolated from each other, ignoring the hidden relationships among concepts. However, the set of concepts in a CBM often has an intrinsic structure in which concepts are correlated: changing one concept inherently impacts its related concepts. To address this limitation, we propose Graph CBMs, a new variant of CBM that captures concept relationships by constructing latent concept graphs, which can be combined with CBMs to enhance model performance while retaining their interpretability. Our experimental results on real-world image classification tasks demonstrate that Graph CBMs (1) achieve superior performance on image classification tasks while providing richer concept-structure information for interpretability; (2) can utilize latent concept graphs for more effective interventions; and (3) remain robust across different training and architecture settings.
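The abstract is the only description of the mechanism available here, so the following is a minimal PyTorch sketch of the general idea under stated assumptions: a backbone produces concept logits (the bottleneck), a learnable latent adjacency over concepts propagates information between related concepts, and a label head reads the refined concept activations. Class and parameter names such as GraphCBMSketch and adj_logits are hypothetical illustrations, not the authors' implementation.

```python
# Hedged sketch of a Graph-CBM-style model (not the paper's code).
import torch
import torch.nn as nn
import torch.nn.functional as F


class GraphCBMSketch(nn.Module):
    def __init__(self, feat_dim: int, num_concepts: int, num_classes: int):
        super().__init__()
        # Backbone stand-in: in practice a pretrained CNN/ViT feature extractor.
        self.backbone = nn.Linear(feat_dim, 256)
        # Concept bottleneck: one score per concept.
        self.concept_head = nn.Linear(256, num_concepts)
        # Latent concept graph: a learnable adjacency over concepts,
        # row-normalized before use (one plausible parameterization).
        self.adj_logits = nn.Parameter(torch.zeros(num_concepts, num_concepts))
        # Label predictor reads the graph-refined concept activations.
        self.label_head = nn.Linear(num_concepts, num_classes)

    def forward(self, x, concept_intervention=None):
        h = F.relu(self.backbone(x))
        concepts = self.concept_head(h)  # raw concept logits
        if concept_intervention is not None:
            # Intervening on a concept lets the graph propagate the change to
            # related concepts; NaN entries mean "no intervention" here.
            concepts = torch.where(torch.isnan(concept_intervention),
                                   concepts, concept_intervention)
        adj = torch.softmax(self.adj_logits, dim=-1)  # normalized latent graph
        refined = concepts + concepts @ adj.T         # one propagation step
        return self.label_head(refined), concepts, adj


# Usage: a batch of 4 pooled image features, 8 concepts, 3 classes.
model = GraphCBMSketch(feat_dim=512, num_concepts=8, num_classes=3)
logits, concepts, adj = model(torch.randn(4, 512))
print(logits.shape, concepts.shape, adj.shape)  # (4, 3) (4, 8) (8, 8)
```

The single residual propagation step stands in for whatever graph module the paper actually uses; the point is only that concept scores are refined through a learned concept graph before the label prediction, which is what enables graph-aware interventions.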
Similar Papers
There Was Never a Bottleneck in Concept Bottleneck Models
Machine Learning (CS)
Makes AI explain its decisions clearly.
If Concept Bottlenecks are the Question, are Foundation Models the Answer?
Machine Learning (CS)
Lets computers learn from pictures without experts.
Flexible Concept Bottleneck Model
CV and Pattern Recognition
Lets AI learn new things without full retraining.