Causally Reliable Concept Bottleneck Models

Published: March 6, 2025 | arXiv ID: 2503.04363v2

By: Giovanni De Felice, Arianna Casanova Flores, Francesco De Santis, and more

Potential Business Impact:

Makes AI models reason about why things happen, not just what happens.

Business Areas:
A/B Testing, Data and Analytics

Concept-based models are an emerging paradigm in deep learning that constrains the inference process to operate through human-interpretable variables, facilitating explainability and human interaction. However, these architectures, on par with popular opaque neural models, fail to account for the true causal mechanisms underlying the target phenomena represented in the data. This hampers their ability to support causal reasoning tasks, limits out-of-distribution generalization, and hinders the implementation of fairness constraints. To overcome these issues, we propose Causally reliable Concept Bottleneck Models (C$^2$BMs), a class of concept-based architectures that enforce reasoning through a bottleneck of concepts structured according to a model of the real-world causal mechanisms. We also introduce a pipeline to automatically learn this structure from observational data and unstructured background knowledge (e.g., scientific literature). Experimental evidence suggests that C$^2$BMs are more interpretable and causally reliable, and respond better to interventions than standard opaque and concept-based models, while matching their accuracy.
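
To make the "bottleneck of concepts structured according to a model of the real-world causal mechanisms" concrete, here is a minimal PyTorch sketch of the idea. It is not the authors' implementation: the class name, the encoding of the causal graph as a `parents` dictionary, and the `interventions` argument are all assumptions made for illustration.

```python
# A minimal sketch, not the paper's code: names, the DAG encoding,
# and the intervention interface are assumptions for illustration.
from typing import Dict, List, Optional

import torch
import torch.nn as nn

class CausalConceptBottleneck(nn.Module):
    """Concepts are predicted in causal (topological) order, each from
    the input features plus its parent concepts; the task head sees
    only the concept vector, so every prediction flows through it."""

    def __init__(self, in_dim: int, parents: Dict[int, List[int]], n_classes: int):
        super().__init__()
        self.parents = parents          # concept -> causal parents (a DAG)
        self.order = list(parents)      # keys assumed topologically sorted
        # One small predictor per concept: input features + parent values.
        self.concept_heads = nn.ModuleDict({
            str(c): nn.Linear(in_dim + len(p), 1) for c, p in parents.items()
        })
        self.task_head = nn.Linear(len(parents), n_classes)

    def forward(self, x: torch.Tensor,
                interventions: Optional[Dict[int, torch.Tensor]] = None):
        concepts: Dict[int, torch.Tensor] = {}
        for c in self.order:            # parents are computed before children
            if interventions and c in interventions:
                # do-style intervention: clamp this concept; its causal
                # descendants then respond to the new value.
                concepts[c] = interventions[c]
                continue
            pa = [concepts[p] for p in self.parents[c]]
            concepts[c] = torch.sigmoid(
                self.concept_heads[str(c)](torch.cat([x] + pa, dim=-1)))
        c_vec = torch.cat([concepts[c] for c in self.order], dim=-1)
        return self.task_head(c_vec), c_vec
```

A toy usage, showing how intervening on one concept lets its causal descendants react:

```python
# Toy DAG: concept 0 has no parents; 1 depends on 0; 2 on 0 and 1.
model = CausalConceptBottleneck(in_dim=16,
                                parents={0: [], 1: [0], 2: [0, 1]},
                                n_classes=3)
x = torch.randn(4, 16)
logits, c = model(x)
# Clamp concept 1 for the whole batch and re-run:
logits_do, _ = model(x, interventions={1: torch.ones(4, 1)})
```

Predicting each concept from its parents, rather than all concepts independently from the input, is what lets an intervention propagate downstream; this is the responsiveness-to-interventions property the abstract reports.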

Country of Origin
🇨🇭 Switzerland

Repos / Data Links

Page Count
31 pages

Category
Computer Science: Machine Learning (CS)