Reasoning in the Dark: Interleaved Vision-Text Reasoning in Latent Space
By: Chao Chen, Zhixin Ma, Yongqi Li, and more
Potential Business Impact:
Makes AI understand pictures and words faster.
Multimodal reasoning aims to enhance the capabilities of MLLMs by incorporating intermediate reasoning steps before reaching the final answer. It has evolved from text-only reasoning to the integration of visual information, enabling the thought process to be conveyed through both images and text. Despite its effectiveness, current multimodal reasoning methods depend on explicit reasoning steps that require labor-intensive vision-text annotations and inherently introduce significant inference latency. To address these issues, we introduce multimodal latent reasoning, which offers the advantages of multimodal representation, reduced annotation effort, and inference efficiency. To facilitate it, we propose Interleaved Vision-Text Latent Reasoning (IVT-LR), which injects both visual and textual information into the reasoning process within the latent space. Specifically, IVT-LR represents each reasoning step by combining two implicit parts: latent text (the hidden states from the previous step) and latent vision (a set of selected image embeddings). We further introduce a progressive multi-stage training strategy that enables MLLMs to perform these multimodal latent reasoning steps. Experiments on M3CoT and ScienceQA demonstrate that IVT-LR achieves an average accuracy improvement of 5.45% while running more than 5 times faster than existing approaches. Code is available at https://github.com/FYYDCC/IVT-LR.
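To make the step structure concrete, below is a minimal PyTorch sketch of one interleaved latent reasoning step as the abstract describes it: hidden states carried over from the previous step (latent text) are concatenated with a set of selected image embeddings (latent vision) and passed through the language backbone. This is an illustration only; the function name `latent_reasoning_step`, the dot-product rule for selecting image embeddings, and the stand-in Transformer backbone are assumptions made for exposition, not the paper's implementation (see the linked repository for the actual code).

```python
import torch
import torch.nn as nn

def latent_reasoning_step(backbone, latent_text, image_embeds, k=4):
    """One interleaved vision-text latent reasoning step (illustrative sketch).

    latent_text:  (batch, t, d) hidden states carried over from the previous step
    image_embeds: (batch, n, d) image patch embeddings from the vision encoder
    k:            number of image embeddings selected as this step's latent vision
    """
    # Score image embeddings against the most recent latent-text state
    # (dot-product attention here; the paper's actual selection rule may differ).
    query = latent_text[:, -1:, :]                        # (batch, 1, d)
    scores = query @ image_embeds.transpose(1, 2)         # (batch, 1, n)
    top_idx = scores.squeeze(1).topk(k, dim=-1).indices   # (batch, k)

    # Gather the selected embeddings: the "latent vision" part of this step.
    batch_idx = torch.arange(image_embeds.size(0)).unsqueeze(-1)
    latent_vision = image_embeds[batch_idx, top_idx]      # (batch, k, d)

    # Run the interleaved latent sequence through the backbone; its hidden
    # states become the "latent text" carried into the next step.
    step_input = torch.cat([latent_text, latent_vision], dim=1)
    return backbone(step_input)                           # (batch, t + k, d)

# Toy usage with a stand-in backbone and random embeddings.
d = 64
backbone = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d, nhead=4, batch_first=True), num_layers=2
)
latent = torch.randn(2, 8, d)    # latent text from the previous step
patches = torch.randn(2, 32, d)  # image embeddings for the input image
with torch.no_grad():
    for _ in range(3):           # three latent reasoning steps
        latent = latent_reasoning_step(backbone, latent, patches)
```

Because every step stays in the latent space, no intermediate text or image tokens are decoded, which is where the reported inference speed-up comes from.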
Similar Papers
Monet: Reasoning in Latent Visual Space Beyond Images and Language
CV and Pattern Recognition
Lets computers "think" with pictures, not just words.
Diagnosing Visual Reasoning: Challenges, Insights, and a Path Forward
CV and Pattern Recognition
Fixes AI seeing things that aren't there.
Rethinking Visual Information Processing in Multimodal LLMs
CV and Pattern Recognition
Lets computers understand pictures and words together better.