Emergence and Evolution of Interpretable Concepts in Diffusion Models
By: Berk Tinaz, Zalan Fabian, Mahdi Soltanolkotabi
Potential Business Impact:
Reveals how AI composes pictures so the process can be controlled.
Diffusion models have become the go-to method for text-to-image generation, producing high-quality images from noise through a process called reverse diffusion. Understanding the dynamics of the reverse diffusion process is crucial for steering the generation and achieving high sample quality. However, the inner workings of diffusion models are still largely a mystery due to their black-box nature and complex, multi-step generation process. Mechanistic Interpretability (MI) techniques, such as Sparse Autoencoders (SAEs), aim to uncover the operating principles of models through granular analysis of their internal representations. These MI techniques have been successful in understanding and steering the behavior of large language models at scale. However, the great potential of SAEs has not yet been leveraged to gain insight into the intricate generative process of diffusion models. In this work, we leverage the SAE framework to probe the inner workings of a popular text-to-image diffusion model, and uncover a variety of human-interpretable concepts in its activations. Interestingly, we find that even before the first reverse diffusion step is completed, the final composition of the scene can be predicted surprisingly well by looking at the spatial distribution of activated concepts. Moreover, going beyond correlational analysis, we show that the discovered concepts have a causal effect on the model output and can be leveraged to steer the generative process. We design intervention techniques aimed at manipulating image composition and style, and demonstrate that (1) in early stages of diffusion, image composition can be effectively controlled; (2) in the middle stages, image composition is finalized, but stylistic interventions remain effective; and (3) in the final stages, only minor textural details are subject to change.
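To make the SAE-based probing and steering described above concrete, here is a minimal numpy sketch of the general recipe, not the paper's actual implementation: a ReLU sparse autoencoder maps a diffusion activation vector into an overcomplete dictionary of concept activations, and an intervention edits one concept coordinate before decoding back into activation space. All names, dimensions, and the random (untrained) weights are illustrative; in practice the SAE is trained on activations collected from the model's intermediate layers with a reconstruction plus L1-sparsity objective, and the edited activation is fed back into the diffusion model at a chosen denoising step.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions: d-dim diffusion activations, m-dim (overcomplete)
# concept dictionary. Real SAEs use much larger, trained dictionaries.
d, m = 16, 64

# Randomly initialized SAE parameters (stand-ins for trained weights).
W_enc = rng.normal(scale=0.1, size=(m, d))
b_enc = np.zeros(m)
W_dec = rng.normal(scale=0.1, size=(d, m))
b_dec = np.zeros(d)

def sae_encode(x):
    """Sparse concept activations: ReLU keeps only positively firing concepts."""
    return np.maximum(W_enc @ x + b_enc, 0.0)

def sae_decode(z):
    """Reconstruct an activation vector from concept activations."""
    return W_dec @ z + b_dec

def steer(x, concept_idx, strength):
    """Causal intervention: boost (or, with negative strength, suppress) one
    concept, then decode the edited activation to feed back into the model."""
    z = sae_encode(x)
    z[concept_idx] += strength
    return sae_decode(z)

# A stand-in for one spatial position's activation vector at some diffusion step.
x = rng.normal(size=d)
z = sae_encode(x)                     # which concepts fire here, and how strongly
x_steered = steer(x, concept_idx=3, strength=2.0)
```

Mapping `z` over all spatial positions gives the "spatial distribution of activated concepts" the abstract refers to; applying `steer` early, mid, or late in reverse diffusion corresponds to the composition, style, and texture interventions, respectively.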
Similar Papers
Dissecting and Mitigating Diffusion Bias via Mechanistic Interpretability
CV and Pattern Recognition
Fixes AI art to be fair and unbiased.
Measuring Semantic Information Production in Generative Diffusion Models
Machine Learning (Stat)
Shows when AI decides what picture to make.
DesignDiffusion: High-Quality Text-to-Design Image Generation with Diffusion Models
CV and Pattern Recognition
Creates pictures from words for designs.