Understanding Hardness of Vision-Language Compositionality from A Token-level Causal Lens
By: Ziliang Chen, Tianang Xiao, Jusheng Zhang, and more
Potential Business Impact:
Teaches computers to understand image details better.
Contrastive Language-Image Pre-training (CLIP) delivers strong cross-modal generalization by aligning images and texts in a shared embedding space, yet it persistently fails at compositional reasoning over objects, attributes, and relations, often behaving like a bag-of-words matcher. Prior causal accounts typically model text as a single vector, obscuring token-level structure and leaving core phenomena, such as prompt sensitivity and failures on hard negatives, unexplained. We address this gap with a token-aware causal representation learning (CRL) framework grounded in a sequential, language-token structural causal model (SCM). Our theory extends block identifiability to tokenized text, proving that CLIP's contrastive objective can recover the modal-invariant latent variable under both sentence-level and token-level SCMs. Crucially, token granularity yields the first principled explanation of CLIP's compositional brittleness: composition nonidentifiability. We show the existence of pseudo-optimal text encoders that achieve perfect modal-invariant alignment yet are provably insensitive to SWAP, REPLACE, and ADD operations over atomic concepts, thereby failing to distinguish correct captions from hard negatives despite optimizing the same training objective as true-optimal encoders. The analysis further links language-side nonidentifiability to visual-side failures via the modality gap and shows how iterated composition operators compound hardness, motivating improved negative-mining strategies.
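To make the SWAP, REPLACE, and ADD operations concrete, here is a minimal Python sketch. It is not the authors' code: the caption, the attribute lexicon, and the helper names are illustrative assumptions. It forms the three kinds of hard negatives over a tokenized caption, then uses a deliberately order-insensitive bag-of-words "encoder" to stand in for the pseudo-optimal encoders described above, which cannot distinguish a caption from its SWAP negative.

```python
from collections import Counter

# Sketch (assumed, not from the paper) of the three hard-negative
# operations named in the abstract: SWAP, REPLACE, ADD over atomic concepts.

ATTRIBUTE_LEXICON = {"red": "blue", "large": "small", "wooden": "metal"}

def swap(tokens, i, j):
    """SWAP: exchange two atomic concepts (here, the two attributes)."""
    out = list(tokens)
    out[i], out[j] = out[j], out[i]
    return out

def replace(tokens, i, lexicon=ATTRIBUTE_LEXICON):
    """REPLACE: substitute one atomic concept with a distractor."""
    out = list(tokens)
    out[i] = lexicon.get(out[i], out[i])
    return out

def add(tokens, i, concept):
    """ADD: insert an extra concept the image does not support."""
    out = list(tokens)
    out.insert(i, concept)
    return out

def bow_encode(tokens):
    """An order-insensitive 'encoder': the multiset of tokens. It plays
    the role of a pseudo-optimal text encoder from the abstract."""
    return Counter(tokens)

caption = "the red cube is left of the blue ball".split()
swapped = swap(caption, 1, 7)               # "red" <-> "blue"
print(" ".join(swapped))                    # the blue cube is left of the red ball
print(" ".join(replace(caption, 1)))        # "red" -> "blue"
print(" ".join(add(caption, 2, "shiny")))   # inserts an unsupported attribute

# SWAP preserves the token multiset, so the bag-of-words encoder maps
# the original caption and its SWAP hard negative to the same representation:
print(bow_encode(caption) == bow_encode(swapped))  # True
```

The last line shows the abstract's point in miniature: because SWAP leaves the multiset of tokens unchanged, any encoder that discards token order can align perfectly with the image on the original caption while remaining blind to the hard negative.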
Similar Papers
Enhancing Compositional Reasoning in CLIP via Reconstruction and Alignment of Text Descriptions
CV and Pattern Recognition
Helps computers understand how words relate in pictures.
Prompt-Based Continual Compositional Zero-Shot Learning
CV and Pattern Recognition
Teaches AI to learn new things without forgetting old ones.
Logic Unseen: Revealing the Logical Blindspots of Vision-Language Models
CV and Pattern Recognition
Teaches computers to understand logic in pictures.