Vision-as-Inverse-Graphics Agent via Interleaved Multimodal Reasoning
By: Shaofeng Yin, Jiaxin Ge, Zora Zhiruo Wang, and more
Potential Business Impact:
Makes computers understand and change pictures like drawings.
Vision-as-inverse-graphics, the idea of reconstructing an image as an editable graphics program, is a long-standing goal of computer vision. Yet even strong VLMs cannot achieve it in one shot, as they lack fine-grained spatial and physical grounding. Our key insight is that closing this gap requires interleaved multimodal reasoning through iterative execution and verification. Building on this insight, we present VIGA (Vision-as-Inverse-Graphics Agent), which starts from an empty world and reconstructs or edits scenes through a closed-loop write-run-render-compare-revise procedure. To support long-horizon reasoning, VIGA combines (i) a skill library that alternates generator and verifier roles and (ii) an evolving context memory that holds plans, code diffs, and render history. VIGA is task-agnostic: it requires no auxiliary modules and covers a wide range of tasks, including 3D reconstruction, multi-step scene editing, 4D physical interaction, and 2D document editing. Empirically, VIGA substantially improves over one-shot baselines on BlenderGym (35.32%) and SlideBench (117.17%). VIGA is also model-agnostic: it requires no finetuning, enabling a unified protocol for evaluating heterogeneous foundation VLMs. To better support this protocol, we introduce BlenderBench, a challenging benchmark that stress-tests interleaved multimodal reasoning with a graphics engine, on which VIGA improves by 124.70%.
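To make the closed-loop write-run-render-compare-revise procedure concrete, the following is a minimal Python sketch of how such an agent loop could be organized. All names here (ContextMemory, generate_code, run_and_render, verify, viga_loop) are illustrative placeholders rather than the authors' actual API, and the generator, renderer, and verifier are stubbed out.

```python
# Minimal sketch, assuming the loop structure described in the abstract:
# a VLM alternates generator and verifier roles while an evolving context
# memory stores plans, code diffs, and render history.

from dataclasses import dataclass, field


@dataclass
class ContextMemory:
    """Evolving context: plans, code diffs, and render history (placeholder)."""
    plan: str = ""
    code_diffs: list = field(default_factory=list)
    renders: list = field(default_factory=list)


def generate_code(target_image, memory):
    """Generator role: the VLM writes or edits a graphics program (stub)."""
    return "# graphics program (e.g., Blender Python) would go here"


def run_and_render(program):
    """Execute the program in a graphics engine and return a render (stub)."""
    return {"image": None, "errors": []}


def verify(target_image, render, memory):
    """Verifier role: the VLM compares the render against the target (stub)."""
    return {"accepted": False, "feedback": "adjust object positions"}


def viga_loop(target_image, max_steps=10):
    """Start from an empty world; iterate write -> run -> render -> compare -> revise."""
    memory = ContextMemory(plan="reconstruct the target scene from scratch")
    program = ""
    for _ in range(max_steps):
        program = generate_code(target_image, memory)           # write
        result = run_and_render(program)                        # run + render
        memory.code_diffs.append(program)
        memory.renders.append(result["image"])
        report = verify(target_image, result["image"], memory)  # compare
        if report["accepted"]:
            break
        memory.plan += "\n" + report["feedback"]                 # revise via feedback
    return program
```

The key design point the sketch illustrates is that the same VLM can be prompted into two roles, generator and verifier, while the context memory carries the accumulated plans, diffs, and renders across iterations to support long-horizon reasoning.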
Similar Papers
See or Say Graphs: Agent-Driven Scalable Graph Understanding with Vision-Language Models
Artificial Intelligence
Lets computers understand complex pictures and text together.
Scaling Agentic Reinforcement Learning for Tool-Integrated Reasoning in VLMs
Artificial Intelligence
Teaches computers to "think" with pictures and tools.
View-on-Graph: Zero-shot 3D Visual Grounding via Vision-Language Reasoning on Scene Graphs
CV and Pattern Recognition
Helps robots find objects using words.