Score: 2

Text to Robotic Assembly of Multi-Component Objects using 3D Generative AI and Vision-Language Models

Published: November 4, 2025 | arXiv ID: 2511.02162v1

By: Alexander Htet Kyaw, Richa Gupta, Dhruv Shah, and more

BigTech Affiliations: Massachusetts Institute of Technology, Google

Potential Business Impact:

Robots assemble complex, multi-component objects from simple text descriptions.

Business Areas:
Robotics Hardware, Science and Engineering, Software

Advances in 3D generative AI have enabled the creation of physical objects from text prompts, but challenges remain in creating objects involving multiple component types. We present a pipeline that integrates 3D generative AI with vision-language models (VLMs) to enable the robotic assembly of multi-component objects from natural language. Our method leverages VLMs for zero-shot, multi-modal reasoning about geometry and functionality to decompose AI-generated meshes into multi-component 3D models using predefined structural and panel components. We demonstrate that a VLM is capable of determining which mesh regions need panel components in addition to structural components, based on object functionality. Evaluation across test objects shows that users preferred the VLM-generated assignments 90.6% of the time, compared to 59.4% for rule-based and 2.5% for random assignment. Lastly, the system allows users to refine component assignments through conversational feedback, enabling greater human control and agency in making physical objects with generative AI and robotics.
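To make the assignment step concrete, here is a minimal sketch of what the per-region decision might look like. The paper's actual pipeline is not shown here; `query_vlm` is a stub standing in for a real zero-shot vision-language model call, and the region labels, the `functional_surface` attribute, and the simple rule inside the stub are all illustrative assumptions, not the authors' method.

```python
# Hypothetical sketch of VLM-based component assignment for a generated mesh.
# Assumption: the mesh has already been decomposed into named regions.
from dataclasses import dataclass


@dataclass
class Region:
    name: str                  # mesh region label, e.g. "seat" or "leg"
    functional_surface: bool   # does a user sit on it or place things on it?


def query_vlm(object_prompt: str, region: Region) -> set[str]:
    """Stub in place of a zero-shot VLM query: every region receives
    structural components; functional surfaces also receive a panel."""
    parts = {"structural"}
    if region.functional_surface:
        parts.add("panel")
    return parts


def assign_components(object_prompt: str, regions: list[Region]) -> dict[str, set[str]]:
    # One query per mesh region, conditioned on the object's intended use.
    return {r.name: query_vlm(object_prompt, r) for r in regions}


if __name__ == "__main__":
    chair = [Region("leg", False), Region("seat", True), Region("backrest", True)]
    for name, parts in assign_components("a small wooden chair", chair).items():
        print(f"{name}: {sorted(parts)}")
```

In the paper's setting, the stub would be replaced by an actual VLM prompt that reasons about geometry and functionality, and the conversational-feedback loop described in the abstract would let a user override individual assignments before assembly.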

Country of Origin
🇺🇸 United States

Page Count
19 pages

Category
Computer Science:
Robotics