Controllable Concept Bottleneck Models
By: Hongbin Lin, Chenyang Ren, Juangui Xu, and more
Potential Business Impact:
Makes smart models easily fixable and updatable.
Concept Bottleneck Models (CBMs) have garnered much attention for their ability to elucidate the prediction process through a human-understandable concept layer. However, most previous studies focused on static scenarios where the data and concepts are assumed to be fixed and clean. In real-world applications, deployed models require continuous maintenance: we often need to remove erroneous or sensitive data (unlearning), correct mislabeled concepts, or incorporate newly acquired samples (incremental learning) to adapt to evolving environments. Thus, deriving efficient editable CBMs without retraining from scratch remains a significant challenge, particularly in large-scale applications. To address these challenges, we propose Controllable Concept Bottleneck Models (CCBMs). Specifically, CCBMs support three granularities of model editing: concept-label-level, concept-level, and data-level, the latter of which encompasses both data removal and data addition. CCBMs enjoy mathematically rigorous closed-form approximations derived from influence functions that obviate the need for retraining. Experimental results demonstrate the efficiency and adaptability of our CCBMs, affirming their practical value in enabling dynamic and trustworthy CBMs.
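To make the idea of "editing without retraining" concrete, here is a minimal sketch of influence-function (Newton-step) data removal on a plain ridge-regression model. This is not the paper's CCBM formulation; it only illustrates the general mechanism the abstract refers to: because the regularized squared loss is quadratic, a single Newton step from the trained parameters recovers the leave-one-out solution exactly, while for general losses the same update is an approximation. All variable names and the toy data are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 200, 5
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X @ w_true + 0.1 * rng.normal(size=n)
lam = 1e-2  # fixed L2 regularization strength (illustrative)

def fit(X, y):
    # closed-form ridge minimizer of (1/2)||Xw - y||^2 + (lam/2)||w||^2
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

w_hat = fit(X, y)  # model trained on the full dataset

# --- unlearn sample k with one influence-function / Newton step ---
k = 7
r_k = X[k] @ w_hat - y[k]                 # residual of the removed sample
# Hessian of the objective with sample k removed
H = X.T @ X - np.outer(X[k], X[k]) + lam * np.eye(d)
# gradient of the reduced objective at w_hat is -X[k] * r_k,
# so the Newton step adds H^{-1} X[k] r_k
w_unlearn = w_hat + np.linalg.solve(H, X[k] * r_k)

# exact retrain without sample k, for comparison
w_exact = fit(np.delete(X, k, axis=0), np.delete(y, k))
print(np.linalg.norm(w_unlearn - w_exact))  # tiny: the step is exact here
```

The same template extends to data addition (add the new sample's gradient and Hessian contribution instead of subtracting) and, in spirit, to the concept-level and concept-label-level edits the paper describes, where the relevant gradients are taken through the concept layer.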
Similar Papers
Graph Concept Bottleneck Models
Machine Learning (CS)
Shows how ideas connect to understand pictures.
Process-Guided Concept Bottleneck Model
Machine Learning (CS)
Lets AI understand science by following rules.
Sample-efficient Learning of Concepts with Theoretical Guarantees: from Data to Concepts without Interventions
Machine Learning (Stat)
Teaches computers to learn the right reasons.