Score: 1

Learning complete and explainable visual representations from itemized text supervision

Published: December 11, 2025 | arXiv ID: 2512.11141v1

By: Yiwei Lyu, Chenhui Zhao, Soumyanil Banerjee, and more

Potential Business Impact:

Could help doctors detect and localize distinct findings in medical scans by making image models more complete and interpretable.

Business Areas:
Image Recognition, Data and Analytics, Software

Training vision models with language supervision enables general and transferable representations. However, many visual domains, especially non-object-centric domains such as medical imaging and remote sensing, contain itemized text annotations: multiple text items describing distinct and semantically independent findings within a single image. Such supervision differs from standard multi-caption supervision, where captions are redundant or highly overlapping. Here, we introduce ItemizedCLIP, a framework for learning complete and explainable visual representations from itemized text supervision. ItemizedCLIP employs a cross-attention module to produce text item-conditioned visual embeddings and a set of tailored objectives that jointly enforce item independence (distinct regions for distinct items) and representation completeness (coverage of all items). Across four domains with naturally itemized text supervision (brain MRI, head CT, chest CT, remote sensing) and one additional synthetically itemized dataset, ItemizedCLIP achieves substantial improvements in zero-shot performance and fine-grained interpretability over baselines. The resulting ItemizedCLIP representations are semantically grounded, item-differentiable, complete, and visually interpretable. Our code is available at https://github.com/MLNeurosurg/ItemizedCLIP.
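
To make the core idea concrete, below is a minimal sketch of text item-conditioned cross-attention, where each itemized text embedding queries patch-level visual features to produce its own item-conditioned visual embedding, and the attention map over patches serves as the spatial explanation. This assumes PyTorch; the module name, dimensions, and single-head attention are illustrative choices, not the authors' implementation (see the linked repository for that), and the paper's additional objectives for item independence and completeness are only noted in comments.

```python
# Sketch, not the authors' code: item-conditioned cross-attention over image patches.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ItemConditionedAttention(nn.Module):
    """Produce one visual embedding per text item by attending over image patches."""

    def __init__(self, dim: int = 512):
        super().__init__()
        self.q_proj = nn.Linear(dim, dim)  # text item embeddings -> queries
        self.k_proj = nn.Linear(dim, dim)  # patch features -> keys
        self.v_proj = nn.Linear(dim, dim)  # patch features -> values
        self.scale = dim ** -0.5

    def forward(self, item_emb: torch.Tensor, patch_feats: torch.Tensor):
        # item_emb:    (num_items, dim)   one embedding per itemized finding
        # patch_feats: (num_patches, dim) spatial features from the image encoder
        q = self.q_proj(item_emb)
        k = self.k_proj(patch_feats)
        v = self.v_proj(patch_feats)
        attn = torch.softmax(q @ k.t() * self.scale, dim=-1)  # (num_items, num_patches)
        item_visual = attn @ v  # item-conditioned visual embeddings
        # In the paper, extra objectives push distinct items toward distinct regions
        # (independence) and make the items jointly cover the image (completeness).
        return item_visual, attn


if __name__ == "__main__":
    torch.manual_seed(0)
    module = ItemConditionedAttention(dim=512)
    items = torch.randn(3, 512)      # e.g. three findings from one radiology report
    patches = torch.randn(196, 512)  # e.g. a 14x14 patch grid from a vision backbone
    item_visual, attn = module(items, patches)

    # CLIP-style alignment per item: cosine similarity between each text item
    # and its item-conditioned visual embedding (a full contrastive loss would
    # also compare against other images and items in the batch).
    scores = F.cosine_similarity(F.normalize(items, dim=-1),
                                 F.normalize(item_visual, dim=-1), dim=-1)
    print(item_visual.shape, attn.shape, scores.shape)  # (3, 512) (3, 196) (3,)
```

The attention maps returned per item are what make the representation visually interpretable: each finding in the report can be traced back to the image regions it attended to.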

Country of Origin
🇺🇸 United States

Repos / Data Links
https://github.com/MLNeurosurg/ItemizedCLIP

Page Count
27 pages

Category
Computer Science:
Computer Vision and Pattern Recognition