Optimizing Vision-Language Consistency via Cross-Layer Regional Attention Alignment
By: Yifan Wang, Hongfeng Ai, Quangao Liu, and more
Potential Business Impact:
Helps computers understand pictures and words better.
Vision Language Models (VLMs) face challenges in effectively coordinating diverse attention mechanisms for cross-modal embedding learning, leading to mismatched attention and suboptimal performance. We propose Consistent Cross-layer Regional Alignment (CCRA), which introduces Layer-Patch-wise Cross Attention (LPWCA) to capture fine-grained regional-semantic correlations by jointly weighting patch-wise and layer-wise embeddings, and Progressive Attention Integration (PAI), which systematically coordinates LPWCA, layer-wise, and patch-wise attention mechanisms in sequence. This progressive design ensures consistency from the semantic to the regional level while preventing attention drift and preserving the benefits of each attention mechanism. Experimental results on ten diverse vision-language benchmarks demonstrate that our CCRA-enhanced LLaVA-v1.5-7B model achieves state-of-the-art performance, outperforming all baseline methods with only 3.55M additional parameters, while providing enhanced interpretability through more regionally focused and semantically aligned attention patterns.
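The abstract only names the components; as a rough sketch of the mechanism it describes, the PyTorch code below fuses text queries with visual features drawn from several encoder layers. LPWCA attends jointly over (layer, patch) tokens so each layer-patch pair gets its own weight, and PAI then applies layer-wise and patch-wise attention stages in sequence. All class names, tensor shapes, head counts, and the mean-pooling used to form per-layer and per-patch summaries are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn


class LPWCA(nn.Module):
    """Sketch of Layer-Patch-wise Cross Attention (assumed design):
    text queries attend over visual tokens flattened across both the
    layer and patch axes, so each (layer, patch) pair is weighted jointly."""

    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, text: torch.Tensor, vis: torch.Tensor) -> torch.Tensor:
        # text: (B, T, D); vis: (B, L, P, D) with L encoder layers, P patches
        B, L, P, D = vis.shape
        kv = vis.reshape(B, L * P, D)      # joint layer-patch token axis
        out, _ = self.attn(text, kv, kv)   # (B, T, D)
        return out


class PAI(nn.Module):
    """Sketch of Progressive Attention Integration (assumed design):
    apply LPWCA first, then a layer-wise (semantic) attention stage,
    then a patch-wise (regional) attention stage, in sequence."""

    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        self.lpwca = LPWCA(dim, num_heads)
        self.layer_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.patch_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, text: torch.Tensor, vis: torch.Tensor) -> torch.Tensor:
        # Stage 1: joint layer-patch cross attention
        h = self.lpwca(text, vis)
        # Stage 2: layer-wise stage over per-layer summaries (mean over patches)
        layer_tokens = vis.mean(dim=2)                            # (B, L, D)
        h, _ = self.layer_attn(h, layer_tokens, layer_tokens)
        # Stage 3: patch-wise stage over per-patch summaries (mean over layers)
        patch_tokens = vis.mean(dim=1)                            # (B, P, D)
        h, _ = self.patch_attn(h, patch_tokens, patch_tokens)
        return h                                                  # (B, T, D)


if __name__ == "__main__":
    # Toy shapes: 2 samples, 16 text tokens, 4 layers, 36 patches, dim 256
    B, T, L, P, D = 2, 16, 4, 36, 256
    text = torch.randn(B, T, D)
    vis = torch.randn(B, L, P, D)
    fused = PAI(D)(text, vis)
    print(fused.shape)  # torch.Size([2, 16, 256])
```

One design point the sequence is meant to capture: resolving coarse semantics (which layers matter) before fine regions (which patches matter) keeps the later, more local attention stage anchored, which is what the abstract means by preventing attention drift.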
Similar Papers
AddressVLM: Cross-view Alignment Tuning for Image Address Localization using Large Vision-Language Models
CV and Pattern Recognition
Helps phones find exact street addresses from pictures.
Focusing by Contrastive Attention: Enhancing VLMs' Visual Reasoning
CV and Pattern Recognition
Improves computer vision by focusing on important details.
MAP: Mitigating Hallucinations in Large Vision-Language Models with Map-Level Attention Processing
CV and Pattern Recognition
Stops AI from making things up about pictures.