Synthetic Captions for Open-Vocabulary Zero-Shot Segmentation
By: Tim Lebailly, Vijay Veerabadran, Satwik Kottur, and more
Potential Business Impact:
Lets computers label every part of a picture with words, even for objects they were never trained on.
Generative vision-language models (VLMs) exhibit strong high-level image understanding but, as our findings indicate, lack spatially dense alignment between the vision and language modalities. Orthogonal to advances in generative VLMs, another line of research has focused on representation learning for vision-language alignment, targeting zero-shot inference on dense tasks such as segmentation. In this work, we bridge these two directions by densely aligning images with synthetic descriptions generated by VLMs. Synthetic captions are inexpensive, scalable, and easy to generate, making them an excellent source of high-level semantic understanding for dense alignment methods. Empirically, our approach outperforms prior work on standard zero-shot open-vocabulary segmentation benchmarks while also being more data-efficient.
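
The abstract describes a two-stage idea: first align dense image features with the embedding of a synthetic, VLM-generated caption, then segment new images zero-shot by matching per-patch features against class-name text embeddings. The sketch below is a minimal illustration of that idea under assumed PyTorch conventions, not the paper's implementation: the encoders are randomly initialized stand-ins, and the names (patch_encoder, text_encoder) and the simple cosine objective are hypothetical placeholders for whatever dense loss the authors actually use.

# Minimal sketch (not the authors' implementation) of the two stages the
# abstract describes: (1) aligning dense image features with the embedding of
# a synthetic caption, and (2) zero-shot segmentation by matching per-patch
# features against class-name text embeddings. The encoders below are
# hypothetical stand-ins (randomly initialized linear projections).
import torch
import torch.nn.functional as F

torch.manual_seed(0)

D = 64        # shared vision-language embedding dimension (assumed)
H = W = 14    # patch grid of a ViT-style image encoder (assumed)

# Hypothetical stand-ins for a dense image encoder and a text encoder.
patch_encoder = torch.nn.Linear(768, D)   # maps patch tokens into the shared space
text_encoder = torch.nn.Linear(512, D)    # maps caption/class tokens into the shared space

# --- Stage 1: dense alignment with a synthetic caption ---
patch_tokens = torch.randn(H * W, 768)    # dense features for one image (placeholder)
caption_tokens = torch.randn(1, 512)      # pooled embedding of a VLM caption (placeholder)

patches = F.normalize(patch_encoder(patch_tokens), dim=-1)    # (H*W, D)
caption = F.normalize(text_encoder(caption_tokens), dim=-1)   # (1, D)

# A toy alignment objective: pull the mean patch embedding toward the caption
# embedding. This is only a placeholder for the paper's dense alignment loss.
image_embedding = F.normalize(patches.mean(dim=0, keepdim=True), dim=-1)
alignment_loss = 1.0 - (image_embedding * caption).sum()
alignment_loss.backward()   # gradients flow into both projection heads

# --- Stage 2: zero-shot open-vocabulary segmentation ---
class_names = ["cat", "grass", "sky"]
class_tokens = torch.randn(len(class_names), 512)   # text features per class (placeholder)
class_embeds = F.normalize(text_encoder(class_tokens), dim=-1)

with torch.no_grad():
    similarity = patches @ class_embeds.T            # (H*W, num_classes)
    segmentation = similarity.argmax(dim=-1).reshape(H, W)

print(segmentation.shape)   # torch.Size([14, 14]): a patch-level class map

Because every class is represented only by its text embedding, the same matching step works for arbitrary class vocabularies at inference time, which is what makes the segmentation open-vocabulary and zero-shot.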
Similar Papers
Unifying Vision-Language Latents for Zero-label Image Caption Enhancement
CV and Pattern Recognition
Helps computers describe pictures without seeing labels.
Image Recognition with Vision and Language Embeddings of VLMs
CV and Pattern Recognition
Helps computers understand pictures better with words or just sight.
Vision-Language Integration for Zero-Shot Scene Understanding in Real-World Environments
CV and Pattern Recognition
Lets computers understand new pictures without training.