LaViT: Aligning Latent Visual Thoughts for Multi-modal Reasoning
By: Linquan Wu, Tianxiang Jiang, Yifei Dong, and more
Current multimodal latent reasoning methods often rely on external supervision (e.g., auxiliary images), ignoring intrinsic visual attention dynamics. In this work, we identify a critical Perception Gap in distillation: student models frequently mimic a teacher's textual output while attending to fundamentally divergent visual regions, effectively relying on language priors rather than grounded perception. To bridge this gap, we propose LaViT, a framework that aligns latent visual thoughts rather than static embeddings. LaViT compels the student to autoregressively reconstruct the teacher's visual semantics and attention trajectories prior to text generation, employing a curriculum sensory gating mechanism to prevent shortcut learning. Extensive experiments show that LaViT significantly enhances visual grounding, achieving gains of up to +16.9% on complex reasoning tasks and enabling a compact 3B model to outperform larger open-source variants and proprietary models such as GPT-4o.
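The abstract does not specify how the alignment objective is implemented, so the sketch below is only one plausible reading: a distillation loss that combines text imitation with alignment of visual attention trajectories and reconstruction of the teacher's latent visual semantics, modulated by a curriculum gate. Every function name, weight, and schedule here (e.g., `curriculum_gate`, `lavit_style_loss`, `w_attn`) is an illustrative assumption, not LaViT's actual code.

```python
# Illustrative sketch only: not the paper's implementation.
import torch
import torch.nn.functional as F


def curriculum_gate(step: int, total_steps: int) -> float:
    """Hypothetical sensory-gating schedule: heavy teacher alignment early in
    training, linearly decaying so the student cannot lean on the teacher's
    signals as a shortcut forever (assumed form, not from the paper)."""
    return max(0.0, 1.0 - step / max(1, total_steps))


def lavit_style_loss(
    student_text_logits: torch.Tensor,  # (B, T, V) student token logits
    teacher_text_ids: torch.Tensor,     # (B, T)    teacher's textual output
    student_attn: torch.Tensor,         # (B, H, Q, P) student attention over image patches
    teacher_attn: torch.Tensor,         # (B, H, Q, P) teacher attention over image patches
    student_latent: torch.Tensor,       # (B, K, D) student's reconstructed visual thoughts
    teacher_latent: torch.Tensor,       # (B, K, D) teacher's visual semantics
    gate: float,                        # output of curriculum_gate(step, total_steps)
    w_attn: float = 1.0,
    w_latent: float = 1.0,
) -> torch.Tensor:
    # 1) Text distillation: imitate the teacher's answer tokens.
    text_loss = F.cross_entropy(
        student_text_logits.flatten(0, 1), teacher_text_ids.flatten()
    )

    # 2) Attention-trajectory alignment: penalize divergence between where the
    #    student and the teacher look, targeting the "Perception Gap".
    attn_loss = F.kl_div(
        F.log_softmax(student_attn, dim=-1),
        F.softmax(teacher_attn, dim=-1),
        reduction="batchmean",
    )

    # 3) Latent visual-thought reconstruction: the student predicts the
    #    teacher's visual semantics before generating text.
    latent_loss = F.mse_loss(student_latent, teacher_latent)

    # 4) Curriculum gating: the alignment terms dominate while the gate is
    #    open and fade as it closes, pushing the student toward its own
    #    grounded perception rather than copied teacher signals.
    return text_loss + gate * (w_attn * attn_loss + w_latent * latent_loss)
```

Under this reading, the gate controls how long the student may rely on teacher-provided visual supervision; other interpretations of "sensory gating" (e.g., masking raw visual inputs) are equally consistent with the abstract.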