Subsecond 3D Mesh Generation for Robot Manipulation
By: Qian Wang, Omar Abdellall, Tony Gao, and more
3D meshes are a fundamental representation widely used in computer science and engineering. In robotics, they are particularly valuable because they capture objects in a form that aligns directly with how robots interact with the physical world, enabling core capabilities such as predicting stable grasps, detecting collisions, and simulating dynamics. Although automatic 3D mesh generation methods have shown promising progress in recent years, potentially offering a path toward real-time robot perception, two critical challenges remain. First, generating high-fidelity meshes is prohibitively slow for real-time use, often requiring tens of seconds per object. Second, mesh generation by itself is insufficient: in robotics, a mesh must be contextually grounded, i.e., correctly segmented from the scene and registered with the proper scale and pose. Moreover, unless these grounding steps are themselves efficient, they simply introduce new bottlenecks. In this work, we introduce an end-to-end system that addresses these challenges, producing a high-quality, contextually grounded 3D mesh from a single RGB-D image in under one second. Our pipeline integrates open-vocabulary object segmentation, accelerated diffusion-based mesh generation, and robust point cloud registration, each optimized for both speed and accuracy. We demonstrate its effectiveness in a real-world manipulation task, showing that it enables meshes to serve as a practical, on-demand representation for robot perception and planning.
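The pipeline described here has three stages: open-vocabulary segmentation, accelerated diffusion-based mesh generation, and point cloud registration to recover scale and pose. Below is a minimal Python sketch of how such stages might compose; segment_object and generate_mesh are hypothetical placeholder interfaces (the abstract does not name the underlying models), and the scale-and-centroid alignment is a simplified stand-in for the paper's robust registration step. Only the depth back-projection follows a fixed convention, the standard pinhole camera model.

    import numpy as np

    def backproject(depth, mask, K):
        # Back-project masked depth pixels into 3D camera-frame points using
        # the standard pinhole model: x = (u - cx) * z / fx, y = (v - cy) * z / fy.
        v, u = np.nonzero(mask & (depth > 0))
        z = depth[v, u]
        x = (u - K[0, 2]) * z / K[0, 0]
        y = (v - K[1, 2]) * z / K[1, 1]
        return np.stack([x, y, z], axis=1)  # (N, 3) metric points

    def ground_mesh(rgb, depth, K, text_query, segment_object, generate_mesh):
        # Stage 1 (hypothetical interface): open-vocabulary segmentation
        # returns a boolean (H, W) mask for the queried object.
        mask = segment_object(rgb, text_query)

        # Stage 2 (hypothetical interface): few-step diffusion mesh generation
        # returns unit-scale vertices (V, 3) and triangle faces (F, 3).
        verts, faces = generate_mesh(rgb, mask)

        # Stage 3: ground the generated mesh against the observed partial
        # point cloud back-projected from the masked depth pixels.
        scene = backproject(depth, mask, K)

        # Recover metric scale by matching bounding-box diagonals.
        mesh_diag = np.linalg.norm(verts.max(0) - verts.min(0))
        scene_diag = np.linalg.norm(scene.max(0) - scene.min(0))
        verts = verts * (scene_diag / mesh_diag)

        # Coarse pose via centroid alignment; a full system would refine
        # this, e.g. with global registration followed by ICP.
        verts = verts + (scene.mean(0) - verts.mean(0))
        return verts, faces

The two model callables are passed in as parameters to keep the sketch self-contained; in a real system they would wrap the segmentation and generation networks, and the sub-second budget would hinge on how few denoising steps the generator needs.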