LISA-3D: Lifting Language-Image Segmentation to 3D via Multi-View Consistency
By: Zhongbin Guo, Jiahe Liu, Wenyu Gao, and more
Potential Business Impact:
Turns words into 3D objects from pictures.
Text-driven 3D reconstruction demands a mask generator that simultaneously understands open-vocabulary instructions and remains consistent across viewpoints. We present LISA-3D, a two-stage framework that lifts language-image segmentation into 3D by retrofitting the instruction-following model LISA with geometry-aware Low-Rank Adaptation (LoRA) layers and reusing a frozen SAM-3D reconstructor. During training we exploit off-the-shelf RGB-D sequences and their camera poses to build a differentiable reprojection loss that enforces cross-view agreement without requiring any additional 3D-text supervision. The resulting masks are concatenated with RGB images to form RGBA prompts for SAM-3D, which outputs Gaussian splats or textured meshes without retraining. Across ScanRefer and Nr3D, LISA-3D improves language-to-3D accuracy by up to +15.6 points over single-view baselines while adapting only 11.6M parameters. The system is modular, data-efficient, and supports zero-shot deployment on unseen categories, providing a practical recipe for language-guided 3D content creation. Our code will be available at https://github.com/binisalegend/LISA-3D.
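To make the cross-view consistency idea concrete, below is a minimal sketch of a differentiable reprojection loss of the kind the abstract describes: a predicted mask in one view is compared against the mask of a second view after warping pixels through the depth map and relative camera pose. This is an illustrative assumption written in PyTorch, not the paper's released code; the function name `reprojection_consistency_loss`, the tensor shapes, and the masked L1 penalty are all hypothetical choices.

```python
# Hypothetical sketch of a differentiable reprojection loss for cross-view
# mask consistency (assumes PyTorch). Shapes, names, and the L1 penalty are
# illustrative assumptions, not the paper's exact formulation.
import torch
import torch.nn.functional as F


def reprojection_consistency_loss(mask_a, mask_b, depth_a, K, T_ab):
    """Warp view-A pixels into view B via depth and pose, then compare masks.

    mask_a, mask_b: (B, 1, H, W) predicted soft masks for the two views.
    depth_a:        (B, 1, H, W) depth map aligned with view A.
    K:              (B, 3, 3) camera intrinsics.
    T_ab:           (B, 4, 4) rigid transform from view A's frame to view B's.
    """
    B, _, H, W = mask_a.shape
    device = mask_a.device

    # Pixel grid in homogeneous coordinates (u, v, 1).
    v, u = torch.meshgrid(
        torch.arange(H, device=device, dtype=torch.float32),
        torch.arange(W, device=device, dtype=torch.float32),
        indexing="ij",
    )
    pix = torch.stack([u, v, torch.ones_like(u)], dim=0).reshape(1, 3, -1).expand(B, -1, -1)

    # Back-project to view-A camera space: X_a = D * K^{-1} [u, v, 1]^T.
    rays = torch.linalg.inv(K) @ pix                      # (B, 3, H*W)
    pts_a = rays * depth_a.reshape(B, 1, -1)              # (B, 3, H*W)

    # Transform into view-B camera space and project with the intrinsics.
    pts_a_h = torch.cat([pts_a, torch.ones_like(pts_a[:, :1])], dim=1)
    pts_b = (T_ab @ pts_a_h)[:, :3]                       # (B, 3, H*W)
    proj = K @ pts_b
    z = proj[:, 2:3].clamp(min=1e-6)
    uv_b = proj[:, :2] / z                                # (B, 2, H*W)

    # Normalize to [-1, 1] and bilinearly sample mask_b (differentiable).
    grid = torch.stack(
        [2.0 * uv_b[:, 0] / (W - 1) - 1.0, 2.0 * uv_b[:, 1] / (H - 1) - 1.0],
        dim=-1,
    ).reshape(B, H, W, 2)
    mask_b_warped = F.grid_sample(mask_b, grid, align_corners=True)

    # Penalize disagreement only where the reprojection lands inside view B.
    valid = ((grid.abs() <= 1.0).all(dim=-1, keepdim=True)
             .permute(0, 3, 1, 2).float())
    return (valid * (mask_a - mask_b_warped).abs()).sum() / valid.sum().clamp(min=1.0)
```

In a training loop of the kind sketched above, this term would be added to the segmentation loss so that gradients flow back into the LoRA-adapted mask generator, while the depth maps and poses come from the off-the-shelf RGB-D sequences and stay fixed.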
Similar Papers
SAM 3D: 3Dfy Anything in Images
CV and Pattern Recognition
Turns flat pictures into 3D objects.
OpenTrack3D: Towards Accurate and Generalizable Open-Vocabulary 3D Instance Segmentation
CV and Pattern Recognition
Lets robots understand and find any object.
Let Language Constrain Geometry: Vision-Language Models as Semantic and Spatial Critics for 3D Generation
CV and Pattern Recognition
Makes 3D pictures match words better.