Unifying Vision-Language Latents for Zero-label Image Caption Enhancement
By: Sanghyun Byun, Jung Ick Guack, Mohanad Odema, and more
Potential Business Impact:
Helps computers describe pictures without needing labeled training data.
Vision-language models (VLMs) achieve remarkable performance through large-scale image-text pretraining. However, their reliance on labeled image datasets limits scalability and leaves vast amounts of unlabeled image data underutilized. To address this, we propose Unified Vision-Language Alignment for Zero-Label Enhancement (ViZer), an enhancement training framework that enables zero-label learning in image captioning, providing a practical starting point for broader zero-label adaptation in vision-language tasks. Unlike prior approaches that rely on human- or synthetically annotated datasets, ViZer actively aligns vision and language representation features during training, enabling existing VLMs to generate improved captions without requiring text labels or full retraining. We demonstrate ViZer's advantage primarily through qualitative evaluation, since automated caption metrics such as CIDEr and BERTScore often penalize details that are absent from reference captions. Applying ViZer to SmolVLM-Base and Qwen2-VL, we observe consistent qualitative improvements, producing captions that are more grounded and descriptive than their baselines.
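The abstract describes aligning vision and language representation features during training without text labels. As a rough illustration only, the sketch below shows one way such an alignment term could look: a cosine loss pulling pooled image-encoder features toward the caption decoder's hidden states. The loss form, layer choices, and shapes are assumptions for illustration, not the paper's published ViZer objective.

```python
# Minimal sketch of a vision-language feature-alignment term in the spirit of
# zero-label caption enhancement. This is NOT the paper's implementation; the
# cosine loss, mean pooling, and shared feature width are assumptions.
import torch
import torch.nn.functional as F


def alignment_loss(vision_feats: torch.Tensor,
                   text_hidden: torch.Tensor) -> torch.Tensor:
    """vision_feats: (batch, d) pooled image-encoder features.
    text_hidden:  (batch, seq, d) decoder hidden states of the generated caption.
    Assumes both have already been projected to the same width d.
    """
    # Mean-pool the generated-caption hidden states to one vector per image.
    text_pooled = text_hidden.mean(dim=1)                 # (batch, d)
    v = F.normalize(vision_feats, dim=-1)
    t = F.normalize(text_pooled, dim=-1)
    # 1 - cosine similarity: minimized when image and caption features agree.
    return (1.0 - (v * t).sum(dim=-1)).mean()


if __name__ == "__main__":
    # Hypothetical shapes for a quick sanity check.
    vision = torch.randn(4, 768)
    hidden = torch.randn(4, 32, 768)
    print(alignment_loss(vision, hidden).item())
```

In a setup like this, the alignment term would be added to the model's own caption-generation loss during fine-tuning, so the decoder stays anchored to the image representation even though no ground-truth captions are available.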
Similar Papers
Synthetic Captions for Open-Vocabulary Zero-Shot Segmentation
CV and Pattern Recognition
Lets computers understand pictures better.
Image Recognition with Vision and Language Embeddings of VLMs
CV and Pattern Recognition
Helps computers understand pictures better with words or just sight.
Vision-Language Integration for Zero-Shot Scene Understanding in Real-World Environments
CV and Pattern Recognition
Lets computers understand new pictures without training.