Interleaved Latent Visual Reasoning with Selective Perceptual Modeling
By: Shuai Dong, Siyuan Wang, Xingyu Liu, and more
Potential Business Impact:
Makes AI understand pictures and words better, faster.
Interleaved reasoning paradigms enhance Multimodal Large Language Models (MLLMs) with visual feedback but are hindered by the prohibitive computational cost of repeatedly re-encoding pixel-dense images. A promising alternative, latent visual reasoning, circumvents this bottleneck yet currently forces a critical trade-off: methods either sacrifice precise perceptual modeling by over-compressing features or fail to model dynamic problems due to static, non-interleaved structures. We introduce Interleaved Latent Visual Reasoning (ILVR), a framework that unifies dynamic state evolution with precise perceptual modeling. ILVR interleaves textual generation with latent visual representations that act as specific, evolving cues for subsequent reasoning. To enable this, we employ a self-supervision strategy where a Momentum Teacher Model selectively distills relevant features from helper images into sparse supervision targets. This adaptive selection mechanism guides the model to autonomously generate context-aware visual signals. Extensive experiments on multimodal reasoning benchmarks demonstrate that ILVR significantly outperforms existing approaches, effectively bridging the gap between fine-grained perception and sequential multimodal reasoning.
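To make the training idea concrete, here is a minimal sketch (not the authors' released code) of how a momentum teacher could select sparse features from a helper image as supervision targets for student-generated latent visual tokens. All names (update_momentum_teacher, select_sparse_targets, latent_distillation_loss), tensor shapes, and hyperparameters such as the EMA rate and top-k size are assumptions for illustration.

```python
# Sketch of momentum-teacher selective distillation for latent visual reasoning.
# Assumptions: teacher/student share architecture; helper-image features and the
# pooled reasoning context are already computed as tensors.
import torch
import torch.nn.functional as F

@torch.no_grad()
def update_momentum_teacher(teacher, student, m=0.996):
    # EMA update: teacher parameters slowly track the student.
    for p_t, p_s in zip(teacher.parameters(), student.parameters()):
        p_t.data.mul_(m).add_(p_s.data, alpha=1.0 - m)

@torch.no_grad()
def select_sparse_targets(teacher_feats, context_query, k=16):
    # teacher_feats: (N_patches, D) helper-image features from the teacher.
    # context_query: (D,) pooled representation of the current reasoning step.
    # Keep only the k patches most relevant to the context (selective modeling).
    scores = teacher_feats @ context_query          # (N_patches,)
    topk = scores.topk(k).indices
    return teacher_feats[topk]                      # (k, D) sparse targets

def latent_distillation_loss(student_latents, sparse_targets):
    # student_latents: (k, D) latent visual tokens the student generates in place
    # of re-encoding the image; align them with the teacher's selected features.
    s = F.normalize(student_latents, dim=-1)
    t = F.normalize(sparse_targets, dim=-1)
    return (1.0 - (s * t).sum(dim=-1)).mean()       # cosine-distance loss
```

In a full training loop, a loss of this kind would presumably be combined with the standard next-token objective on the interleaved text, so the model learns when to emit latent visual cues as well as what they should encode.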
Similar Papers
Reasoning in the Dark: Interleaved Vision-Text Reasoning in Latent Space
CV and Pattern Recognition
Makes AI understand pictures and words faster.
Monet: Reasoning in Latent Visual Space Beyond Images and Language
CV and Pattern Recognition
Lets computers "think" with pictures, not just words.
PeRL: Permutation-Enhanced Reinforcement Learning for Interleaved Vision-Language Reasoning
CV and Pattern Recognition
Teaches computers to understand pictures better together.