Optimizing Vision-Language Consistency via Cross-Layer Regional Attention Alignment

Published: July 31, 2025 | arXiv ID: 2508.00945v1

By: Yifan Wang, Hongfeng Ai, Quangao Liu, and more

Potential Business Impact:

Improves how AI models jointly interpret images and text, which could make applications such as visual question answering and image-based search more accurate.

Vision-Language Models (VLMs) face challenges in coordinating diverse attention mechanisms for cross-modal embedding learning, leading to mismatched attention and suboptimal performance. We propose Consistent Cross-layer Regional Alignment (CCRA), which introduces Layer-Patch-wise Cross Attention (LPWCA) to capture fine-grained regional-semantic correlations by jointly weighting patch-wise and layer-wise embeddings, and Progressive Attention Integration (PAI), which systematically coordinates LPWCA, layer-wise, and patch-wise attention in sequence. This progressive design ensures consistency from the semantic to the regional level while preventing attention drift and maximizing the benefit of each attention mechanism. Experimental results on ten diverse vision-language benchmarks demonstrate that our CCRA-enhanced LLaVA-v1.5-7B model achieves state-of-the-art performance, outperforming all baseline methods with only 3.55M additional parameters, while offering enhanced interpretability through more regionally focused and semantically aligned attention patterns.
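To make the core idea concrete, here is a minimal sketch of what "jointly weighting patch-wise and layer-wise embeddings" could look like: one softmax attention distribution computed over all (layer, patch) pairs at once, rather than attending over layers and patches separately. This is an illustrative toy, not the paper's actual LPWCA implementation; the function name, shapes, and the use of a single text query vector are all assumptions made for the example.

```python
import numpy as np

def softmax(x):
    # numerically stable softmax over a 1-D score vector
    e = np.exp(x - x.max())
    return e / e.sum()

def layer_patch_cross_attention(layer_patch_feats, text_query):
    """Toy joint layer-patch weighting (illustrative, not the paper's code).

    layer_patch_feats: (L, P, d) visual embeddings from L layers x P patches
    text_query:        (d,) text embedding acting as the attention query
    Returns a single (d,) visual embedding pooled over all (layer, patch) pairs.
    """
    L, P, d = layer_patch_feats.shape
    flat = layer_patch_feats.reshape(L * P, d)   # flatten so layers and patches are weighted jointly
    scores = flat @ text_query / np.sqrt(d)      # scaled dot-product relevance scores
    weights = softmax(scores)                    # ONE distribution over all L*P (layer, patch) pairs
    return weights @ flat                        # attention-weighted sum -> (d,)

# toy usage with random features
rng = np.random.default_rng(0)
feats = rng.normal(size=(4, 9, 8))   # 4 layers, 9 patches, embedding dim 8
query = rng.normal(size=8)
pooled = layer_patch_cross_attention(feats, query)
print(pooled.shape)
```

Because a single softmax spans both axes, a highly relevant patch in one layer can outweigh an entire other layer, which is one way to read the abstract's claim that LPWCA captures fine-grained regional-semantic correlations across layers.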

Country of Origin
🇭🇰 🇨🇦 Hong Kong, Canada

Page Count
19 pages

Category
Computer Science:
Computer Vision and Pattern Recognition