Score: 3

Flexible Concept Bottleneck Model

Published: November 10, 2025 | arXiv ID: 2511.06678v1

By: Xingbo Du, Qiantong Dou, Lei Fan, and more

Potential Business Impact:

Lets AI models adopt new concepts without full retraining.

Business Areas:
Image Recognition, Data and Analytics, Software

Concept bottleneck models (CBMs) improve neural network interpretability by introducing an intermediate layer that maps human-understandable concepts to predictions. Recent work has explored the use of vision-language models (VLMs) to automate concept selection and annotation. However, existing VLM-based CBMs typically require full model retraining when new concepts are involved, which limits their adaptability and flexibility in real-world scenarios, especially considering the rapid evolution of vision-language foundation models. To address these issues, we propose Flexible Concept Bottleneck Model (FCBM), which supports dynamic concept adaptation, including complete replacement of the original concept set. Specifically, we design a hypernetwork that generates prediction weights based on concept embeddings, allowing seamless integration of new concepts without retraining the entire model. In addition, we introduce a modified sparsemax module with a learnable temperature parameter that dynamically selects the most relevant concepts, enabling the model to focus on the most informative features. Extensive experiments on five public benchmarks demonstrate that our method achieves accuracy comparable to state-of-the-art baselines with a similar number of effective concepts. Moreover, the model generalizes well to unseen concepts with just a single epoch of fine-tuning, demonstrating its strong adaptability and flexibility.
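The two mechanisms the abstract describes — a hypernetwork that turns concept embeddings into prediction weights, and a temperature-controlled sparsemax that zeroes out irrelevant concepts — can be illustrated with a toy numpy sketch. All shapes, layer sizes, and parameter names below are illustrative assumptions, not the paper's actual architecture; the sparsemax itself follows the standard simplex-projection formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

def sparsemax(z, temperature=1.0):
    """Sparsemax with a temperature: project z/temperature onto the
    probability simplex, so low-scoring entries become exactly zero."""
    z = np.asarray(z, dtype=float) / temperature
    z_sorted = np.sort(z)[::-1]            # descending
    cssv = np.cumsum(z_sorted)
    k = np.arange(1, len(z) + 1)
    support = 1 + k * z_sorted > cssv      # entries kept in the support
    k_max = k[support][-1]
    tau = (cssv[k_max - 1] - 1.0) / k_max  # threshold
    return np.maximum(z - tau, 0.0)

# --- toy hypernetwork (sizes are arbitrary for the sketch) ---
d_embed, d_hidden, n_classes, n_concepts = 8, 16, 3, 5

# Hypernetwork parameters: map each concept embedding to per-class weights.
W1 = rng.normal(scale=0.1, size=(d_embed, d_hidden))
W2 = rng.normal(scale=0.1, size=(d_hidden, n_classes))

def hyper_weights(concept_embeds):
    """Generate (n_concepts, n_classes) prediction weights from concept
    embeddings, so a new concept set needs no full retraining -- only
    new embeddings fed through the same hypernetwork."""
    h = np.tanh(concept_embeds @ W1)
    return h @ W2

concept_embeds = rng.normal(size=(n_concepts, d_embed))  # e.g. VLM text embeddings
concept_scores = rng.normal(size=n_concepts)             # image-concept similarities

selected = sparsemax(concept_scores, temperature=0.5)    # sparse concept selection
W_pred = hyper_weights(concept_embeds)                   # weights generated on the fly
logits = selected @ W_pred                               # class predictions
```

A lower temperature concentrates the sparsemax output on fewer concepts, which is how a learnable temperature lets the model tune how many "effective concepts" drive each prediction.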

Country of Origin
🇨🇳 China, 🇦🇺 Australia, 🇦🇪 United Arab Emirates

Repos / Data Links

Page Count
11 pages

Category
Computer Science:
CV and Pattern Recognition