Text to Robotic Assembly of Multi Component Objects using 3D Generative AI and Vision Language Models
By: Alexander Htet Kyaw, Richa Gupta, Dhruv Shah, and more
Potential Business Impact:
Robots build complex objects from simple text ideas.
Advances in 3D generative AI have enabled the creation of physical objects from text prompts, but challenges remain in creating objects involving multiple component types. We present a pipeline that integrates 3D generative AI with vision-language models (VLMs) to enable the robotic assembly of multi-component objects from natural language. Our method leverages VLMs for zero-shot, multi-modal reasoning about geometry and functionality to decompose AI-generated meshes into multi-component 3D models using predefined structural and panel components. We demonstrate that a VLM is capable of determining which mesh regions need panel components in addition to structural components, based on object functionality. Evaluation across test objects shows that users preferred the VLM-generated assignments 90.6% of the time, compared to 59.4% for rule-based and 2.5% for random assignment. Lastly, the system allows users to refine component assignments through conversational feedback, enabling greater human control and agency in making physical objects with generative AI and robotics.
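To make the component-assignment step concrete, here is a minimal Python sketch of how a VLM might label mesh regions as needing a panel component in addition to a structural one. This is not the authors' implementation: the names (MeshRegion, query_vlm, assign_components) are hypothetical, and the VLM call is stubbed with a trivial heuristic so the sketch runs end to end; a real system would render each region and query an actual vision-language model.

```python
# Hypothetical sketch of VLM-based component assignment (not the paper's code).
# A generated mesh is split into regions; each region is labeled as needing
# only a structural component, or a structural plus panel component, based
# on the object's intended functionality.

from dataclasses import dataclass
from typing import Dict, List, Literal

ComponentType = Literal["structural", "structural+panel"]

@dataclass
class MeshRegion:
    region_id: int
    description: str  # e.g. "flat horizontal surface at the top"

def query_vlm(object_prompt: str, region: MeshRegion) -> ComponentType:
    """Placeholder for a real vision-language model call.

    In practice this would send a rendering of the region plus the text
    prompt to a VLM and parse its zero-shot answer. Here it is stubbed
    with a keyword heuristic so the example is self-contained.
    """
    functional_surfaces = ("surface", "seat", "top", "shelf", "back")
    if any(word in region.description for word in functional_surfaces):
        return "structural+panel"
    return "structural"

def assign_components(object_prompt: str,
                      regions: List[MeshRegion]) -> Dict[int, ComponentType]:
    """Ask the (stubbed) VLM for a component type for every mesh region."""
    return {r.region_id: query_vlm(object_prompt, r) for r in regions}

if __name__ == "__main__":
    regions = [
        MeshRegion(0, "flat horizontal surface at the top"),
        MeshRegion(1, "vertical support leg"),
    ]
    assignments = assign_components("a small side table", regions)
    print(assignments)  # {0: 'structural+panel', 1: 'structural'}

    # Conversational refinement: a user correction overrides one assignment,
    # mirroring the feedback loop described in the abstract.
    assignments[1] = "structural+panel"
```

The per-region dictionary of assignments also gives the conversational-refinement step a natural interface: a user request such as "add a panel to the leg" reduces to overwriting one entry before the assembly plan is sent to the robot.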
Similar Papers
VLM-driven Skill Selection for Robotic Assembly Tasks
Robotics
Robot builds things by watching and listening.