Beyond the Black Box: Identifiable Interpretation and Control in Generative Models via Causal Minimality
By: Lingjing Kong, Shaoan Xie, Guangyi Chen, and more
Potential Business Impact:
Shows how AI models create what they generate, so people can understand and steer them.
Deep generative models, while revolutionizing fields like image and text generation, largely operate as opaque black boxes, hindering human understanding, control, and alignment. While methods like sparse autoencoders (SAEs) show remarkable empirical success, they often lack theoretical guarantees, risking subjective insights. Our primary objective is to establish a principled foundation for interpretable generative models. We demonstrate that the principle of causal minimality -- favoring the simplest causal explanation -- can endow the latent representations of diffusion vision and autoregressive language models with clear causal interpretation and robust, component-wise identifiable control. We introduce a novel theoretical framework for hierarchical selection models, where higher-level concepts emerge from the constrained composition of lower-level variables, better capturing the complex dependencies in data generation. Under theoretically derived minimality conditions (manifesting as sparsity or compression constraints), we show that learned representations can be equivalent to the true latent variables of the data-generating process. Empirically, applying these constraints to leading generative models allows us to extract their innate hierarchical concept graphs, offering fresh insights into their internal knowledge organization. Furthermore, these causally grounded concepts serve as levers for fine-grained model steering, paving the way for transparent, reliable systems.
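The abstract points to sparsity or compression constraints on latent representations as the practical form of causal minimality. The sketch below is illustrative only, not the paper's implementation: it shows a generic sparse-autoencoder-style bottleneck with an L1 penalty on latent activations, the standard way such a sparsity constraint is imposed. All dimensions, the penalty weight, and the class and function names are assumptions for illustration.

```python
# Illustrative sketch (not the paper's method): a sparsity constraint on
# intermediate activations, in the spirit of sparse autoencoders (SAEs).
import torch
import torch.nn as nn


class SparseAutoencoder(nn.Module):
    """Encode model activations into an overcomplete latent space."""

    def __init__(self, d_model: int = 512, d_latent: int = 2048):
        super().__init__()
        self.encoder = nn.Linear(d_model, d_latent)
        self.decoder = nn.Linear(d_latent, d_model)

    def forward(self, activations: torch.Tensor):
        # Non-negative codes; sparsity is encouraged by the L1 term in the loss.
        latent = torch.relu(self.encoder(activations))
        reconstruction = self.decoder(latent)
        return reconstruction, latent


def sae_loss(reconstruction, latent, activations, l1_weight: float = 1e-3):
    # Reconstruction fidelity plus an L1 penalty acting as the sparsity
    # (minimality) constraint on the learned concept activations.
    recon_loss = nn.functional.mse_loss(reconstruction, activations)
    sparsity_loss = latent.abs().mean()
    return recon_loss + l1_weight * sparsity_loss


if __name__ == "__main__":
    sae = SparseAutoencoder()
    acts = torch.randn(8, 512)  # stand-in for a generative model's hidden activations
    recon, z = sae(acts)
    loss = sae_loss(recon, z, acts)
    loss.backward()
    print(f"loss: {loss.item():.4f}, "
          f"active units per sample: {(z > 0).float().sum(dim=1).mean():.1f}")
```

Under this kind of constraint, each latent unit tends to fire for a narrow, reusable pattern in the activations, which is what allows the learned units to be read as candidate concepts and used as steering levers.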
Similar Papers
From Black-box to Causal-box: Towards Building More Interpretable Models
Machine Learning (CS)
Explains how smart programs make decisions.
On the Theoretical Foundation of Sparse Dictionary Learning in Mechanistic Interpretability
Machine Learning (CS)
Unlocks AI's hidden thoughts for better understanding.
Towards Interpretable Deep Generative Models via Causal Representation Learning
Machine Learning (Stat)
Makes AI understand how things cause each other.