ObjectGS: Object-aware Scene Reconstruction and Scene Understanding via Gaussian Splatting
By: Ruijie Zhu, Mulin Yu, Linning Xu, and more
Potential Business Impact:
Lets computers understand and edit objects in 3D scenes.
3D Gaussian Splatting is renowned for its high-fidelity reconstructions and real-time novel view synthesis, yet its lack of semantic understanding limits object-level perception. In this work, we propose ObjectGS, an object-aware framework that unifies 3D scene reconstruction with semantic understanding. Instead of treating the scene as a unified whole, ObjectGS models individual objects as local anchors that generate neural Gaussians and share object IDs, enabling precise object-level reconstruction. During training, we dynamically grow or prune these anchors and optimize their features, while a one-hot ID encoding with a classification loss enforces clear semantic constraints. We show through extensive experiments that ObjectGS not only outperforms state-of-the-art methods on open-vocabulary and panoptic segmentation tasks, but also integrates seamlessly with applications like mesh extraction and scene editing. Project page: https://ruijiezhu94.github.io/ObjectGS_page
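To make the semantic constraint concrete, the one-hot ID encoding with a classification loss described in the abstract can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: it assumes each Gaussian along a ray carries a one-hot object-ID vector, alpha-composites those vectors front-to-back into a per-pixel ID distribution, and applies a cross-entropy loss against the ground-truth object ID. The function names and the exact compositing details are illustrative assumptions.

```python
import numpy as np

def blend_id_logits(one_hot_ids, alphas):
    """Alpha-composite per-Gaussian one-hot object IDs along a ray.

    one_hot_ids: (N, C) one-hot ID vector for each of N Gaussians.
    alphas:      (N,) opacity of each Gaussian, sorted front-to-back.
    Returns a (C,) blended ID distribution for the pixel.
    (Illustrative sketch; the paper's actual compositing may differ.)
    """
    transmittance = 1.0
    blended = np.zeros(one_hot_ids.shape[1])
    for ids, a in zip(one_hot_ids, alphas):
        blended += transmittance * a * ids  # standard front-to-back weights
        transmittance *= (1.0 - a)
    return blended

def id_classification_loss(blended, target_id, eps=1e-8):
    """Cross-entropy between the blended ID distribution and the GT object ID."""
    p = blended / (blended.sum() + eps)  # normalize blending weights
    return -np.log(p[target_id] + eps)

# Example: three Gaussians, objects {0, 1, 2}; the front two belong to object 1.
ids = np.eye(3)[[1, 1, 2]]
alphas = np.array([0.9, 0.5, 0.5])
pixel_ids = blend_id_logits(ids, alphas)
loss_correct = id_classification_loss(pixel_ids, target_id=1)  # small loss
loss_wrong = id_classification_loss(pixel_ids, target_id=2)    # large loss
```

Because the loss is driven by a hard one-hot target rather than a soft feature similarity, each Gaussian is pushed to commit to a single object ID, which is what enables the clean per-object reconstruction the abstract describes.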
Similar Papers
LabelGS: Label-Aware 3D Gaussian Splatting for 3D Scene Segmentation
CV and Pattern Recognition
Lets computers understand and label objects in 3D.
CoRe-GS: Coarse-to-Refined Gaussian Splatting with Semantic Object Focus
CV and Pattern Recognition
Drones build 3D maps of important things faster.
OpenGS-SLAM: Open-Set Dense Semantic SLAM with 3D Gaussian Splatting for Object-Level Scene Understanding
CV and Pattern Recognition
Lets robots understand any object in a room.