Towards more holistic interpretability: A lightweight disentangled Concept Bottleneck Model
By: Gaoxiang Huang, Songning Lai, Yutao Yue
Potential Business Impact:
Helps computers explain *why* they make decisions.
Concept Bottleneck Models (CBMs) enhance interpretability by predicting human-understandable concepts as intermediate representations. However, existing CBMs often suffer from input-to-concept mapping bias and limited controllability, which restrict their practical value and directly undermine the reliability of decisions made by concept-based methods. We propose a lightweight Disentangled Concept Bottleneck Model (LDCBM) that automatically groups visual features into semantically meaningful components without region annotation. By introducing a filter grouping loss and joint concept supervision, our method improves the alignment between visual patterns and concepts, enabling more transparent and robust decision-making. Notably, experiments on three diverse datasets demonstrate that LDCBM achieves higher concept and class accuracy, outperforming previous CBMs in both interpretability and classification performance. By grounding concepts in visual evidence, our method overcomes a fundamental limitation of prior models and enhances the reliability of interpretable AI.
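To make the pipeline concrete, here is a minimal PyTorch-style sketch of a concept bottleneck trained with joint concept/class supervision plus a filter grouping penalty. The paper does not specify its exact loss here; the `grouping_loss` below (pulling each concept filter toward a soft group centroid and pushing centroids apart) and all layer sizes are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch: CBM with joint supervision and an assumed filter grouping loss.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConceptBottleneck(nn.Module):
    def __init__(self, feat_dim=512, n_concepts=112, n_classes=200, n_groups=8):
        super().__init__()
        # Placeholder feature extractor (any CNN backbone would do).
        self.backbone = nn.Sequential(
            nn.Conv2d(3, feat_dim, 3, padding=1),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.concept_head = nn.Linear(feat_dim, n_concepts)   # image -> concepts
        self.classifier = nn.Linear(n_concepts, n_classes)    # concepts -> label
        # Soft assignment of each concept filter to one of n_groups components.
        self.group_logits = nn.Parameter(torch.randn(n_concepts, n_groups))

    def forward(self, x):
        feats = self.backbone(x)
        concept_logits = self.concept_head(feats)
        class_logits = self.classifier(torch.sigmoid(concept_logits))
        return concept_logits, class_logits

def grouping_loss(model):
    """Assumed form of a filter grouping loss: filters in the same (soft)
    group align with their centroid, while different centroids stay apart."""
    W = F.normalize(model.concept_head.weight, dim=1)          # (C, D) filters
    A = F.softmax(model.group_logits, dim=1)                   # (C, G) assignments
    centroids = F.normalize(A.t() @ W, dim=1)                  # (G, D)
    intra = 1.0 - (A * (W @ centroids.t())).sum(1).mean()      # pull to own group
    inter = (centroids @ centroids.t()).triu(1).abs().mean()   # separate groups
    return intra + inter

def total_loss(model, images, concept_targets, labels, lam=0.1):
    # Joint supervision: concept BCE + class CE + grouping penalty.
    c_logits, y_logits = model(images)
    return (F.binary_cross_entropy_with_logits(c_logits, concept_targets.float())
            + F.cross_entropy(y_logits, labels)
            + lam * grouping_loss(model))
```

The key design point reflected here is that grouping is driven purely by a loss on the concept filters, so no region annotations are required; the weight `lam` and the number of groups would be tuned per dataset.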
Similar Papers
Locality-aware Concept Bottleneck Model
CV and Pattern Recognition
Teaches computers to find and use visual clues.
Partially Shared Concept Bottleneck Models
CV and Pattern Recognition
Makes AI explain its decisions clearly and accurately.
Flexible Concept Bottleneck Model
CV and Pattern Recognition
Lets AI learn new things without full retraining.