Causally Reliable Concept Bottleneck Models
By: Giovanni De Felice, Arianna Casanova Flores, Francesco De Santis, and more
Potential Business Impact:
Makes AI understand why things happen, not just what happens.
Concept-based models are an emerging paradigm in deep learning that constrains the inference process to operate through human-interpretable variables, facilitating explainability and human interaction. However, these architectures, like popular opaque neural models, fail to account for the true causal mechanisms underlying the target phenomena represented in the data. This hampers their ability to support causal reasoning tasks, limits out-of-distribution generalization, and hinders the implementation of fairness constraints. To overcome these issues, we propose Causally reliable Concept Bottleneck Models (C²BMs), a class of concept-based architectures that enforce reasoning through a bottleneck of concepts structured according to a model of the real-world causal mechanisms. We also introduce a pipeline to automatically learn this structure from observational data and unstructured background knowledge (e.g., scientific literature). Experimental evidence suggests that C²BMs are more interpretable, more causally reliable, and more responsive to interventions than standard opaque and concept-based models, while maintaining comparable accuracy.
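At a high level, the idea resembles a standard concept bottleneck whose concept layer is wired according to a causal graph: each concept is first predicted from the input and then refined using only its causal parents, so that intervening on one concept propagates to its descendants before the final prediction. The sketch below illustrates this in PyTorch; the class name, the masked-weight mechanism, and the single propagation step are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (not the authors' code) of a concept bottleneck whose concept
# layer is structured by a causal graph. Names and mechanisms are illustrative.
import torch
import torch.nn as nn


class CausalConceptBottleneck(nn.Module):
    def __init__(self, n_features, n_concepts, n_classes, causal_adjacency, hidden=64):
        super().__init__()
        # causal_adjacency[i, j] = 1 iff concept i is a causal parent of concept j.
        self.register_buffer("adj", causal_adjacency.float())
        self.encoder = nn.Sequential(
            nn.Linear(n_features, hidden), nn.ReLU(),
            nn.Linear(hidden, n_concepts),
        )
        # Concept-to-concept weights, masked so each concept only sees its parents.
        self.parent_mix = nn.Linear(n_concepts, n_concepts)
        self.head = nn.Linear(n_concepts, n_classes)

    def _propagate(self, c):
        # Keep only weights along causal edges: weight[j, i] is used only if i -> j.
        masked_w = self.parent_mix.weight * self.adj.T
        return torch.sigmoid(c @ masked_w.T + self.parent_mix.bias)

    def forward(self, x, interventions=None):
        c = torch.sigmoid(self.encoder(x))  # initial concept estimates from the input
        c = self._propagate(c)              # refine each concept from its causal parents
        if interventions:
            # do-style intervention: clamp chosen concepts, let descendants respond,
            # then re-clamp so the intervened values stay fixed.
            c = c.clone()
            for idx, value in interventions.items():
                c[:, idx] = value
            c = self._propagate(c).clone()
            for idx, value in interventions.items():
                c[:, idx] = value
        return self.head(c), c


# Usage on a toy chain c0 -> c1 -> c2 (adjacency and sizes are made up).
adj = torch.tensor([[0, 1, 0],
                    [0, 0, 1],
                    [0, 0, 0]])
model = CausalConceptBottleneck(n_features=10, n_concepts=3, n_classes=2,
                                causal_adjacency=adj)
logits, concepts = model(torch.randn(4, 10), interventions={0: 1.0})
```

Masking the concept-to-concept weights with the adjacency matrix is one simple way to restrict information flow to causal edges; the paper's pipeline additionally learns the causal structure itself from observational data and background knowledge rather than assuming it is given.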
Similar Papers
Graph Concept Bottleneck Models
Machine Learning (CS)
Shows how ideas connect to understand pictures.
There Was Never a Bottleneck in Concept Bottleneck Models
Machine Learning (CS)
Makes AI explain its decisions clearly.
Towards Reasonable Concept Bottleneck Models
Machine Learning (CS)
Teaches computers to think using clear ideas.