Multi-Modal 3D Mesh Reconstruction from Images and Text
By: Melvin Reka, Tessa Pulli, Markus Vincze
Potential Business Impact:
Builds 3D shapes from a few pictures.
6D object pose estimation for unseen objects is essential in robotics but traditionally relies on trained models that require large datasets, incur high computational costs, and struggle to generalize. Zero-shot approaches eliminate the need for training but depend on pre-existing 3D object models, which are often impractical to obtain. To address this, we propose a language-guided few-shot 3D reconstruction method that reconstructs a 3D mesh from a few input images. The proposed pipeline receives a set of input images and a language query. A combination of GroundingDINO and the Segment Anything Model outputs segmented masks, from which a sparse point cloud is reconstructed with VGGSfM. Subsequently, the mesh is reconstructed with the Gaussian Splatting method SuGaR. In a final cleaning step, artifacts are removed, resulting in the final 3D mesh of the queried object. We evaluate the method in terms of accuracy and quality of the geometry and texture. Furthermore, we study the impact of imaging conditions such as viewing angle, number of input images, and image overlap on 3D object reconstruction quality, efficiency, and computational scalability.
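The abstract does not detail how the final cleaning step works; one common way to remove floating reconstruction artifacts is statistical outlier removal on the point cloud. The sketch below is a minimal NumPy illustration of that general technique, not the authors' implementation; the function name, neighbour count `k`, and threshold `std_ratio` are illustrative assumptions.

```python
import numpy as np

def remove_outliers(points, k=8, std_ratio=2.0):
    """Statistical outlier removal (illustrative, not the paper's code):
    drop points whose mean distance to their k nearest neighbours
    exceeds mean + std_ratio * std over the whole cloud."""
    # Full pairwise distance matrix; fine for small clouds,
    # a KD-tree would be used for large reconstructions.
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    d.sort(axis=1)
    mean_knn = d[:, 1:k + 1].mean(axis=1)  # skip self-distance at index 0
    thresh = mean_knn.mean() + std_ratio * mean_knn.std()
    return points[mean_knn <= thresh]

# Dense cluster of 100 points plus one far-away artifact point.
rng = np.random.default_rng(0)
cloud = np.vstack([rng.normal(0.0, 0.01, (100, 3)), [[5.0, 5.0, 5.0]]])
clean = remove_outliers(cloud)  # the isolated artifact is filtered out
```

The neighbour-distance statistic isolates points far from any dense surface region, which is exactly the kind of floater that splatting-based reconstructions tend to produce.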
Similar Papers
ZeroScene: A Zero-Shot Framework for 3D Scene Generation from a Single Image and Controllable Texture Editing
Graphics
Turns one picture into a realistic 3D world.
On-the-fly Reconstruction for Large-Scale Novel View Synthesis from Unposed Images
CV and Pattern Recognition
Creates 3D scenes from photos instantly.
A Generative Approach to High Fidelity 3D Reconstruction from Text Data
CV and Pattern Recognition
Turns words into 3D objects.