Score: 3

VULCAN: Tool-Augmented Multi Agents for Iterative 3D Object Arrangement

Published: December 26, 2025 | arXiv ID: 2512.22351v1

By: Zhengfei Kuang, Rui Lin, Long Zhao, and more

BigTech Affiliations: Google, Stanford University

Potential Business Impact:

Lets computers build and change 3D worlds with words.

Business Areas:
Image Recognition, Data and Analytics, Software

Despite the remarkable progress of Multimodal Large Language Models (MLLMs) in 2D vision-language tasks, their application to complex 3D scene manipulation remains underexplored. In this paper, we bridge this critical gap by tackling three key challenges in the 3D object arrangement task using MLLMs. First, to address the weak visual grounding of MLLMs, which struggle to link programmatic edits with precise 3D outcomes, we introduce an MCP-based API. This shifts the interaction from brittle raw code manipulation to more robust, function-level updates. Second, we augment the MLLM's 3D scene understanding with a suite of specialized visual tools to analyze scene state, gather spatial information, and validate action outcomes. This perceptual feedback loop is critical for closing the gap between language-based updates and precise 3D-aware manipulation. Third, to manage the iterative, error-prone updates, we propose a collaborative multi-agent framework with designated roles for planning, execution, and verification. This decomposition allows the system to robustly handle multi-step instructions and recover from intermediate errors. We demonstrate the effectiveness of our approach on a diverse set of 25 complex object arrangement tasks, where it significantly outperforms existing baselines. Website: vulcan-3d.github.io
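To make the abstract's architecture concrete, here is a minimal sketch of the planner → executor → verifier loop it describes, with function-level scene updates standing in for the paper's MCP-based API. All names (`Scene`, `move_object`, `plan`, `verify`, the hard-coded plan) are illustrative assumptions, not the paper's actual interface.

```python
# Toy sketch of a planner/executor/verifier loop over a function-level
# scene API, in the spirit of the approach described in the abstract.
# Every identifier here is hypothetical; a real system would back the
# planner and verifier with an MLLM and visual tools.

from dataclasses import dataclass, field


@dataclass
class Scene:
    """Toy 3D scene: object name -> (x, y, z) position."""
    objects: dict = field(default_factory=dict)

    # Function-level update, instead of editing raw scene code directly.
    def move_object(self, name, position):
        if name not in self.objects:
            raise KeyError(f"unknown object: {name}")
        self.objects[name] = position

    # Stand-in for a "visual tool" that queries current scene state.
    def get_position(self, name):
        return self.objects[name]


def plan(instruction):
    """Planner agent (stub): decompose an instruction into API calls."""
    # A real MLLM planner would generate these steps; hard-coded here.
    return [("move_object", "mug", (1.0, 0.0, 0.5))]


def execute(scene, steps):
    """Executor agent: apply each function-level update to the scene."""
    for fn, name, arg in steps:
        getattr(scene, fn)(name, arg)


def verify(scene, expected):
    """Verifier agent: report objects whose outcome mismatches the goal."""
    return [n for n, pos in expected.items() if scene.get_position(n) != pos]


scene = Scene(objects={"mug": (0.0, 0.0, 0.5), "lamp": (2.0, 1.0, 0.0)})
steps = plan("put the mug next to the lamp")
execute(scene, steps)
errors = verify(scene, {"mug": (1.0, 0.0, 0.5)})
print(errors)  # an empty list means verification passed
```

In the paper's full framework, a non-empty `errors` list would feed back to the planner so the system can recover from intermediate failures over multiple iterations.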

Country of Origin
🇺🇸 United States

Repos / Data Links

Page Count
29 pages

Category
Computer Science:
CV and Pattern Recognition