GaussianVLM: Scene-centric 3D Vision-Language Models using Language-aligned Gaussian Splats for Embodied Reasoning and Beyond
By: Anna-Maria Halacheva, Jan-Nico Zaech, Xi Wang, and more
Potential Business Impact:
Helps computers understand 3D scenes from pictures.
As multimodal language models advance, their application to 3D scene understanding is a fast-growing frontier, driving the development of 3D Vision-Language Models (VLMs). Current methods depend heavily on object detectors, introducing processing bottlenecks and limiting taxonomic flexibility. To address these limitations, we propose a scene-centric 3D VLM for 3D Gaussian splat scenes that employs language- and task-aware scene representations. Our approach directly embeds rich linguistic features into the 3D scene representation by associating language with each Gaussian primitive, achieving early modality alignment. To process the resulting dense representations, we introduce a dual sparsifier that distills them into compact, task-relevant tokens via task-guided and location-guided pathways, producing sparse, task-aware global and local scene tokens. Notably, we present the first Gaussian splatting-based VLM, leveraging photorealistic 3D representations derived from standard RGB images and demonstrating strong generalization: it improves the performance of a prior 3D VLM five-fold in out-of-domain settings.
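To make the dual-sparsifier idea concrete, here is a minimal sketch of how such token selection could work. All names, dimensions, and the use of cosine similarity and Euclidean distance are illustrative assumptions, not the paper's actual implementation: the task-guided pathway keeps Gaussians whose language-aligned features best match a task embedding, while the location-guided pathway keeps Gaussians nearest a point of interest.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical scene: N Gaussian primitives, each carrying a 3D position and a
# language-aligned feature vector (the "early modality alignment" idea).
N, D = 1000, 32
positions = rng.uniform(-5, 5, size=(N, 3))   # 3D centers of the splats
lang_feats = rng.normal(size=(N, D))          # per-Gaussian language features
lang_feats /= np.linalg.norm(lang_feats, axis=1, keepdims=True)

def dual_sparsify(task_query, focus_point, k_global=16, k_local=16):
    """Distill dense per-Gaussian features into sparse scene tokens.

    task_query:  D-dim embedding of the task/instruction (assumed given).
    focus_point: 3D location of interest (assumed given).
    Returns (global_tokens, local_tokens) from the two pathways.
    """
    # Task-guided pathway: keep Gaussians whose language features best
    # match the task embedding (cosine similarity).
    q = task_query / np.linalg.norm(task_query)
    scores = lang_feats @ q
    global_idx = np.argsort(scores)[-k_global:]

    # Location-guided pathway: keep Gaussians nearest the focus point.
    dists = np.linalg.norm(positions - focus_point, axis=1)
    local_idx = np.argsort(dists)[:k_local]

    return lang_feats[global_idx], lang_feats[local_idx]

g_tokens, l_tokens = dual_sparsify(rng.normal(size=D), np.zeros(3))
print(g_tokens.shape, l_tokens.shape)  # (16, 32) (16, 32)
```

The key design point is that both pathways reduce thousands of dense per-primitive features to a few dozen task-aware tokens, which is what makes feeding a Gaussian-splat scene into a language model tractable.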
Similar Papers
SplatTalk: 3D VQA with Gaussian Splatting
CV and Pattern Recognition
Lets computers understand 3D worlds from pictures.
Agentic 3D Scene Generation with Spatially Contextualized VLMs
CV and Pattern Recognition
Lets computers build and change 3D worlds.
SceneSplat: Gaussian Splatting-based Scene Understanding with Vision-Language Pretraining
CV and Pattern Recognition
Teaches computers to understand 3D spaces from scans.