Color encoding in Latent Space of Stable Diffusion Models
By: Guillem Arias, Ariadna Solà, Martí Armengod, and more
Recent advances in diffusion-based generative models have achieved remarkable visual fidelity, yet a detailed understanding of how specific perceptual attributes, such as color and shape, are internally represented remains limited. This work explores how color is encoded in a generative model through a systematic analysis of the latent representations in Stable Diffusion. Using controlled synthetic datasets, principal component analysis (PCA), and similarity metrics, we reveal that color information is encoded along circular, opponent axes predominantly captured in latent channels c_3 and c_4, whereas intensity and shape are primarily represented in channels c_1 and c_2. Our findings indicate that the latent space of Stable Diffusion exhibits an interpretable structure aligned with an efficient-coding representation. These insights provide a foundation for future work in model understanding, editing applications, and the design of more disentangled generative frameworks.
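The core analysis described above can be sketched in a few lines: encode images into the 4-channel SD latent space, summarize each channel, and run PCA to look for circular (opponent-axis) structure. The sketch below is a minimal, hedged illustration using NumPy only; the toy latents stand in for real VAE encodings of uniformly colored patches (the injected cos/sin signal in channels c_3 and c_4 mimics the structure the abstract reports, and is an assumption of this example, not data from the paper).

```python
import numpy as np

def channelwise_pca(latents, n_components=2):
    """PCA on per-channel mean activations of SD-style latents.

    latents: array of shape (N, 4, H, W), one latent per image.
    Returns projections (N, n_components) and the principal axes.
    """
    feats = latents.mean(axis=(2, 3))      # (N, 4) per-channel means
    feats = feats - feats.mean(axis=0)     # center before PCA
    _, _, Vt = np.linalg.svd(feats, full_matrices=False)
    return feats @ Vt[:n_components].T, Vt[:n_components]

# Toy latents standing in for VAE encodings of 64 colored patches
# sweeping the hue circle (hypothetical setup for illustration):
rng = np.random.default_rng(0)
hues = np.linspace(0, 2 * np.pi, 64, endpoint=False)
latents = rng.normal(0, 0.05, size=(64, 4, 8, 8))
latents[:, 2] += np.cos(hues)[:, None, None]   # opponent axis 1 in c_3
latents[:, 3] += np.sin(hues)[:, None, None]   # opponent axis 2 in c_4

proj, comps = channelwise_pca(latents)
```

If color is encoded circularly in channels c_3 and c_4, the first two principal components load almost entirely on those channels and the projected points trace out a hue circle, which is the signature the paper's PCA analysis looks for.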
Similar Papers
Toward Diffusible High-Dimensional Latent Spaces: A Frequency Perspective
CV and Pattern Recognition
Makes AI image makers create sharper, more detailed pictures.
Color Alignment in Diffusion
CV and Pattern Recognition
Makes AI create pictures with exact colors you want.
Decoupling Complexity from Scale in Latent Diffusion Model
CV and Pattern Recognition
Makes pictures and videos at any level of detail.