Exploring Compositionality in Vision Transformers using Wavelet Representations

Published: December 30, 2025 | arXiv ID: 2512.24438v1

By: Akshad Shyam Purushottamdas, Pranav K Nayak, Divya Mehul Rajparia, and more

Potential Business Impact:

Helps computers understand pictures by breaking them into simpler parts.

Business Areas:
Image Recognition, Data and Analytics, Software

While insights into the workings of transformer models have largely emerged from analysing their behaviour on language tasks, this work investigates the representations learnt by the Vision Transformer (ViT) encoder through the lens of compositionality. We introduce a framework, analogous to prior work on measuring compositionality in representation learning, to test for compositionality in the ViT encoder. Crucial to drawing this analogy is the Discrete Wavelet Transform (DWT), a simple yet effective tool for obtaining input-dependent primitives in the vision setting. By examining how well composed representations reproduce the original image representations, we empirically test the extent to which compositionality is respected in the representation space. Our findings show that primitives from a one-level DWT decomposition produce encoder representations that approximately compose in latent space, offering a new perspective on how ViTs structure information.
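The idea in the abstract can be sketched concretely: a one-level 2D DWT splits an image into four subbands, each subband yields a "primitive" image, and one then asks whether the encoder representations of the primitives compose (here, by summation) into the representation of the full image. The sketch below is a minimal illustration only, not the paper's method: it uses a hand-rolled Haar wavelet and a hypothetical random linear stand-in for the ViT encoder (for which composition holds exactly, since the primitives sum to the image); the paper tests this property empirically for a real ViT.

```python
import numpy as np

def haar_dwt1(x):
    """One-level 2D Haar DWT: returns LL, LH, HL, HH subbands."""
    a = (x[0::2, :] + x[1::2, :]) / 2.0  # row averages
    d = (x[0::2, :] - x[1::2, :]) / 2.0  # row differences
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, lh, hl, hh

def haar_idwt1(ll, lh, hl, hh):
    """Inverse of the one-level Haar DWT above (exact reconstruction)."""
    a = np.empty((ll.shape[0], ll.shape[1] * 2))
    a[:, 0::2], a[:, 1::2] = ll + lh, ll - lh
    d = np.empty_like(a)
    d[:, 0::2], d[:, 1::2] = hl + hh, hl - hh
    x = np.empty((a.shape[0] * 2, a.shape[1]))
    x[0::2, :], x[1::2, :] = a + d, a - d
    return x

rng = np.random.default_rng(0)
img = rng.standard_normal((8, 8))
ll, lh, hl, hh = haar_dwt1(img)
recon = haar_idwt1(ll, lh, hl, hh)  # equals img up to float error

# Input-dependent primitives: one image per subband, other subbands zeroed.
z = np.zeros_like(ll)
primitives = [
    haar_idwt1(ll, z, z, z),
    haar_idwt1(z, lh, z, z),
    haar_idwt1(z, z, hl, z),
    haar_idwt1(z, z, z, hh),
]

# Hypothetical stand-in encoder (random linear map), NOT the paper's ViT.
W = rng.standard_normal((16, 64))
encode = lambda x: W @ x.ravel()

composed = sum(encode(p) for p in primitives)  # compose in latent space
direct = encode(img)
cos = composed @ direct / (np.linalg.norm(composed) * np.linalg.norm(direct))
```

For a linear encoder `cos` is 1 by construction; the paper's empirical question is how close to 1 this similarity stays when `encode` is a trained ViT encoder.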

Country of Origin
🇮🇳 India

Page Count
9 pages

Category
Computer Science:
CV and Pattern Recognition