SAM 3D: 3Dfy Anything in Images
By: SAM 3D Team, Xingyu Chen, Fu-Jen Chu, and more
Potential Business Impact:
Turns single flat pictures into textured 3D objects.
We present SAM 3D, a generative model for visually grounded 3D object reconstruction, predicting geometry, texture, and layout from a single image. SAM 3D excels in natural images, where occlusion and scene clutter are common and visual recognition cues from context play a larger role. We achieve this with a human- and model-in-the-loop pipeline for annotating object shape, texture, and pose, providing visually grounded 3D reconstruction data at unprecedented scale. We learn from this data in a modern, multi-stage training framework that combines synthetic pretraining with real-world alignment, breaking the 3D "data barrier". We obtain significant gains over recent work, with at least a 5:1 win rate in human preference tests on real-world objects and scenes. We will release our code and model weights, an online demo, and a new challenging benchmark for in-the-wild 3D object reconstruction.
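Since the paper's code and API are not described here, the sketch below is only an illustration of what predicting "geometry, texture, and layout" per object amounts to in data terms: a mesh, a texture map, and a camera-frame pose. All names (Object3D, reconstruct) are hypothetical, and the reconstructor is a stub that returns a unit cube rather than a real model.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class Object3D:
    """One reconstructed object: geometry, texture, and scene layout."""
    vertices: np.ndarray     # (V, 3) mesh vertex positions in the object frame
    faces: np.ndarray        # (F, 3) triangle indices into `vertices`
    texture: np.ndarray      # (H, W, 3) RGB texture map, values in [0, 1]
    rotation: np.ndarray     # (3, 3) object-to-camera rotation (layout/pose)
    translation: np.ndarray  # (3,) object-to-camera translation (layout/pose)

def reconstruct(image: np.ndarray) -> list[Object3D]:
    """Hypothetical stand-in for the model: returns a single unit cube so
    the output record above can be exercised end to end."""
    # 8 cube corners; index = 4x + 2y + z over coordinates in {0, 1}.
    verts = np.array([[x, y, z] for x in (0, 1)
                                for y in (0, 1)
                                for z in (0, 1)], dtype=np.float64)
    faces = np.array([  # two triangles per cube face
        [0, 1, 3], [0, 3, 2], [4, 6, 7], [4, 7, 5],
        [0, 4, 5], [0, 5, 1], [2, 3, 7], [2, 7, 6],
        [0, 2, 6], [0, 6, 4], [1, 5, 7], [1, 7, 3],
    ], dtype=np.int64)
    texture = np.full((256, 256, 3), 0.5)  # flat gray placeholder texture
    pose_r = np.eye(3)                     # identity rotation
    pose_t = np.array([0.0, 0.0, 2.0])     # placed 2 m in front of the camera
    return [Object3D(verts, faces, texture, pose_r, pose_t)]

if __name__ == "__main__":
    image = np.zeros((480, 640, 3))        # dummy RGB input image
    for obj in reconstruct(image):
        print(f"{len(obj.vertices)} vertices, {len(obj.faces)} triangles, "
              f"translation {obj.translation}")
```

Representing layout as a per-object camera-frame pose, as done here for illustration, is what lets several such records be composed back into one scene from a single image.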
Similar Papers
Ref-SAM3D: Bridging SAM3D with Text for Reference 3D Reconstruction
CV and Pattern Recognition
Makes 3D models from text and one picture.
GeoSAM2: Unleashing the Power of SAM2 for 3D Part Segmentation
CV and Pattern Recognition
Quickly splits 3D shapes into their parts.
GEN3D: Generating Domain-Free 3D Scenes from a Single Image
CV and Pattern Recognition
Creates realistic 3D worlds from one picture.