Lang3D-XL: Language Embedded 3D Gaussians for Large-scale Scenes
By: Shai Krakovsky, Gal Fiebelman, Sagie Benaim, and more
Potential Business Impact:
Lets computers understand and change 3D worlds with words.
Embedding a language field in a 3D representation enables richer semantic understanding of spatial environments by linking geometry with descriptive meaning. This allows for more intuitive human-computer interaction, enabling scenes to be queried or edited using natural language, and could improve tasks such as scene retrieval, navigation, and multimodal reasoning. While such capabilities could be transformative, particularly for large-scale scenes, we find that recent feature distillation approaches cannot effectively learn over massive Internet data due to semantic feature misalignment and inefficiency in memory and runtime. To this end, we propose a novel approach to address these challenges. First, we introduce extremely low-dimensional semantic bottleneck features as part of the underlying 3D Gaussian representation. These are rendered and then passed through a multi-resolution, feature-based hash encoder, which significantly improves efficiency in both runtime and GPU memory. Second, we introduce an Attenuated Downsampler module and propose several regularizations addressing the semantic misalignment of the ground-truth 2D features. We evaluate our method on the in-the-wild HolyScenes dataset and demonstrate that it surpasses existing approaches in both performance and efficiency.
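To make the efficiency argument concrete, here is a minimal, hypothetical sketch (not the authors' code) of the core idea behind a feature-based multi-resolution hash encoder: a rendered low-dimensional per-pixel bottleneck feature is quantized at several resolutions, each quantized cell is hashed into a small fixed-size table, and the looked-up embeddings are concatenated into a higher-dimensional semantic feature. All names, table sizes, and dimensions below are illustrative assumptions; real systems would learn the tables and follow them with a small MLP.

```python
import random

# Illustrative hyperparameters (assumptions, not the paper's values).
LEVELS = 4              # number of resolution levels (coarse -> fine quantization)
TABLE_SIZE = 2 ** 10    # entries per level; caps memory regardless of scene size
EMB_DIM = 2             # embedding width per level; output dim = LEVELS * EMB_DIM

rng = random.Random(0)
# One small table per level. In a trained model these entries are learnable;
# here they are fixed random values for demonstration.
tables = [
    [[rng.uniform(-1.0, 1.0) for _ in range(EMB_DIM)] for _ in range(TABLE_SIZE)]
    for _ in range(LEVELS)
]

def hash_encode(feature):
    """Map a low-dim feature (floats in [0, 1]) to a LEVELS * EMB_DIM vector."""
    out = []
    for level, table in enumerate(tables):
        res = 2 ** (level + 2)  # quantization resolution grows with the level
        # Quantize each feature channel into one of `res` cells.
        cell = tuple(min(int(x * res), res - 1) for x in feature)
        # Hash the (cell, level) key into the fixed-size table.
        idx = hash(cell + (level,)) % TABLE_SIZE
        out.extend(table[idx])
    return out

encoded = hash_encode([0.21, 0.87, 0.05])
print(len(encoded))  # LEVELS * EMB_DIM = 8
```

The point of the sketch is the memory behavior: storing a tiny bottleneck feature per Gaussian plus a few fixed-size hash tables is far cheaper than storing a full CLIP-scale feature on every Gaussian, which is what makes this kind of design attractive for large scenes.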
Similar Papers
C3G: Learning Compact 3D Representations with 2K Gaussians
CV and Pattern Recognition
Builds detailed 3D worlds from few pictures.
LEGO-SLAM: Language-Embedded Gaussian Optimization SLAM
CV and Pattern Recognition
Robots understand and map places using words.
A Study of the Framework and Real-World Applications of Language Embedding for 3D Scene Understanding
Graphics
Lets computers build 3D worlds from words.