Towards more holistic interpretability: A lightweight disentangled Concept Bottleneck Model

Published: October 17, 2025 | arXiv ID: 2510.15770v1

By: Gaoxiang Huang, Songning Lai, Yutao Yue

Potential Business Impact:

Helps computers explain *why* they make decisions.

Business Areas:
Image Recognition, Data and Analytics, Software

Concept Bottleneck Models (CBMs) enhance interpretability by predicting human-understandable concepts as intermediate representations. However, existing CBMs often suffer from input-to-concept mapping bias and limited controllability, which restricts their practical value and directly undermines the reliability of decisions derived from concept-based methods. We propose the lightweight Disentangled Concept Bottleneck Model (LDCBM), which automatically groups visual features into semantically meaningful components without region annotation. By introducing a filter grouping loss and joint concept supervision, our method improves the alignment between visual patterns and concepts, enabling more transparent and robust decision-making. Notably, experiments on three diverse datasets demonstrate that LDCBM achieves higher concept and class accuracy, outperforming previous CBMs in both interpretability and classification performance. By grounding concepts in visual evidence, our method overcomes a fundamental limitation of prior models and enhances the reliability of interpretable AI.
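The abstract does not spell out the exact form of the filter grouping loss or the joint supervision, so the following is only a minimal PyTorch sketch of how an LDCBM-style objective *could* look: a tiny CBM (features → concepts → label), a grouping term that pulls activations of same-group filters together and pushes groups apart, and a joint loss combining concept and class supervision. All names (`TinyCBM`, `filter_grouping_loss`, `ldcbm_loss`, the fixed filter-to-group assignment, and the loss weights) are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of an LDCBM-style training objective (not the paper's code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyCBM(nn.Module):
    """Minimal concept bottleneck: image -> features -> concepts -> class."""
    def __init__(self, n_concepts, n_classes, n_filters=64, n_groups=4):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, n_filters, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.concept_head = nn.Linear(n_filters, n_concepts)  # x -> concept logits
        self.label_head = nn.Linear(n_concepts, n_classes)    # concepts -> class logits
        # Assumed fixed assignment of filters to groups (round-robin chunks).
        self.register_buffer("groups", torch.arange(n_filters) % n_groups)

    def forward(self, x):
        feats = self.backbone(x)                     # (batch, n_filters)
        c_logits = self.concept_head(feats)
        y_logits = self.label_head(torch.sigmoid(c_logits))
        return feats, c_logits, y_logits

def filter_grouping_loss(feats, groups):
    """One plausible reading of a grouping loss: make activation patterns of
    filters within a group similar across the batch, and dissimilar between
    groups, via cosine similarity of per-filter activation vectors."""
    acts = F.normalize(feats.t(), dim=1)             # (n_filters, batch)
    sim = acts @ acts.t()                            # pairwise filter similarity
    same = (groups[:, None] == groups[None, :]).float()
    eye = torch.eye(len(groups), device=feats.device)
    intra = (sim * (same - eye)).sum() / (same - eye).sum().clamp(min=1)
    inter = (sim * (1 - same)).sum() / (1 - same).sum().clamp(min=1)
    return inter - intra   # minimize: tight groups, well-separated groups

def ldcbm_loss(model, x, concepts, labels, lam_concept=1.0, lam_group=0.1):
    """Joint supervision: class loss + concept loss + grouping regularizer.
    `concepts` are assumed to be binary float annotations per image."""
    feats, c_logits, y_logits = model(x)
    loss_c = F.binary_cross_entropy_with_logits(c_logits, concepts)
    loss_y = F.cross_entropy(y_logits, labels)
    loss_g = filter_grouping_loss(feats, model.groups)
    return loss_y + lam_concept * loss_c + lam_group * loss_g
```

Under this reading, the grouping term disentangles the feature bank into coherent visual components without any region annotation, while the jointly supervised concept head ties each component to a nameable concept; the relative weights are hyperparameters one would tune per dataset.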

Page Count
8 pages

Category
Computer Science:
CV and Pattern Recognition