MulCLIP: A Multi-level Alignment Framework for Enhancing Fine-grained Long-context CLIP

Published: December 8, 2025 | arXiv ID: 2512.07128v1

By: Chau Truong, Hieu Ta Quang, Dung D. Le

Potential Business Impact:

Helps computers understand pictures together with long, detailed text descriptions.

Business Areas:
Image Recognition, Data and Analytics, Software

Vision-language models like CLIP show impressive ability to align images and text, but their training on short, concise captions makes them struggle with lengthy, detailed descriptions. Recent advances mitigate this challenge by leveraging region-proposal information to map visual regions to corresponding sentences from lengthy captions, but they incur notable deployment costs. We introduce MulCLIP, a novel end-to-end multi-level alignment framework that bridges natural long-text structures with image components. MulCLIP first preserves global contrastive alignment between images and both summary and long captions, while extending positional embeddings to accommodate longer text sequences. To further enhance fine-grained understanding, we propose two novel strategies: (1) a token reconstruction alignment over locally calibrated features that strengthens semantic connections between words and image patches, and (2) a subcaption-aggregated patch alignment that automatically extracts and aggregates context-rich patches for each subcaption. Experimental results across diverse benchmarks demonstrate that our method consistently improves downstream performance, while ablation studies confirm that multi-scale alignment is the key factor behind its stronger fine-grained capability relative to region-proposal-assisted approaches, making it particularly suitable for diverse real-world applications.
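
The abstract mentions extending positional embeddings so the text encoder can handle captions beyond CLIP's 77-token limit. A common way to do this (the paper's exact scheme may differ) is to interpolate the pretrained positional-embedding table to the new length. The sketch below assumes PyTorch; the names `extend_pos_embed` and `new_len` are illustrative, not from the paper.

```python
import torch
import torch.nn.functional as F

def extend_pos_embed(pos_embed: torch.Tensor, new_len: int) -> torch.Tensor:
    """Resample a (old_len, dim) positional-embedding table to new_len rows."""
    old_len, dim = pos_embed.shape
    # F.interpolate expects (batch, channels, length), so treat each embedding
    # dimension as a channel and the token positions as the length axis.
    resampled = F.interpolate(
        pos_embed.t().unsqueeze(0),  # (1, dim, old_len)
        size=new_len,
        mode="linear",
        align_corners=True,
    )
    return resampled.squeeze(0).t()  # (new_len, dim)

# Example: stretch a CLIP-sized 77-token table to 248 tokens for long captions.
pos = torch.randn(77, 512)  # stands in for a pretrained embedding table
long_pos = extend_pos_embed(pos, 248)
print(long_pos.shape)  # torch.Size([248, 512])
```

Stretching the pretrained table rather than reinitializing it preserves the position ordering the original text encoder learned, so short captions remain well-aligned while long captions gain valid positions.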

Page Count
20 pages

Category
Computer Science:
CV and Pattern Recognition