See Less, See Right: Bi-directional Perceptual Shaping For Multimodal Reasoning
By: Shuoshuo Zhang, Yizhen Zhang, Jingjing Fu, and more
Potential Business Impact:
Helps computers see details in pictures better.
Large vision-language models (VLMs) often benefit from intermediate visual cues, either injected via external tools or generated as latent visual tokens during reasoning, but these mechanisms still overlook fine-grained visual evidence (e.g., polylines in charts), generalize poorly across domains, and incur high inference-time cost. In this paper, we propose Bi-directional Perceptual Shaping (BiPS), which transforms question-conditioned masked views into bidirectional where-to-look signals that shape perception during training. BiPS first applies a KL-consistency constraint between the original image and an evidence-preserving view that keeps only question-relevant regions, encouraging coarse but complete coverage of supporting pixels. It then applies a KL-separation constraint between the original and an evidence-ablated view where critical pixels are masked so the image no longer supports the original answer, discouraging text-only shortcuts (i.e., answering from text alone) and enforcing fine-grained visual reliance. Across eight benchmarks, BiPS boosts Qwen2.5-VL-7B by 8.2% on average and shows strong out-of-domain generalization to unseen datasets and image types.
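To make the two training constraints described above concrete, here is a minimal sketch of how the KL-consistency and KL-separation terms might look in PyTorch. This is not the authors' implementation; the function name, tensor shapes, the hinge with a `margin` hyperparameter, and the loss weights mentioned afterwards are all illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def bips_kl_terms(logits_orig, logits_preserved, logits_ablated):
    """Sketch of the two KL-based terms described in the abstract.

    logits_orig:      model outputs for the original image
    logits_preserved: outputs for the evidence-preserving view
                      (only question-relevant regions kept)
    logits_ablated:   outputs for the evidence-ablated view
                      (critical pixels masked out)

    All tensors are assumed to be (batch, vocab_size).
    """
    log_p_orig = F.log_softmax(logits_orig, dim=-1)
    log_p_pres = F.log_softmax(logits_preserved, dim=-1)
    log_p_abl = F.log_softmax(logits_ablated, dim=-1)

    # KL-consistency: keep the prediction on the evidence-preserving view
    # close to the prediction on the original image, encouraging coarse but
    # complete coverage of the supporting pixels.
    kl_consistency = F.kl_div(
        log_p_pres, log_p_orig, log_target=True, reduction="batchmean"
    )

    # KL-separation: push the prediction on the evidence-ablated view away
    # from the original prediction, so the model cannot answer from text
    # alone. Since maximizing a KL is unbounded, a hinge on the divergence
    # (with an assumed margin) is one common way to cap the objective.
    kl_separation = F.kl_div(
        log_p_abl, log_p_orig, log_target=True, reduction="batchmean"
    )
    margin = 1.0  # assumed hyperparameter
    separation_penalty = torch.clamp(margin - kl_separation, min=0.0)

    return kl_consistency, separation_penalty
```

In a training loop, these two terms would presumably be added to the usual answer-generation loss with tuned weights, e.g. `loss = task_loss + w1 * kl_consistency + w2 * separation_penalty`; the weighting scheme here is an assumption, not taken from the paper.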
Similar Papers
Building Reasonable Inference for Vision-Language Models in Blind Image Quality Assessment
CV and Pattern Recognition
Makes AI judge picture quality more like people.
Feedback-Driven Vision-Language Alignment with Minimal Human Supervision
CV and Pattern Recognition
Makes AI understand pictures better with less work.
Language-Guided Invariance Probing of Vision-Language Models
CV and Pattern Recognition
Tests if AI understands words that mean the same thing.