Graph-Guided Dual-Level Augmentation for 3D Scene Segmentation
By: Hongbin Lin, Yifan Jiang, Juangui Xu, and more
Potential Business Impact:
Makes 3D maps more accurate for robots.
3D point cloud segmentation aims to assign semantic labels to individual points in a scene for fine-grained spatial understanding. Existing methods typically adopt data augmentation to alleviate the burden of large-scale annotation. However, most augmentation strategies focus only on local transformations or semantic recomposition and overlook the global structural dependencies within a scene. To address this limitation, we propose a graph-guided data augmentation framework with dual-level constraints for realistic 3D scene synthesis. Our method learns object relationship statistics from real-world data to construct guiding graphs for scene generation. Local-level constraints enforce geometric plausibility and semantic consistency between objects, while global-level constraints maintain the topological structure of the scene by aligning the generated layout with the guiding graph. Extensive experiments on indoor and outdoor datasets demonstrate that our framework generates diverse, high-quality augmented scenes, leading to consistent improvements in point cloud segmentation performance across various models.
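To make the dual-level idea concrete, here is a minimal sketch of how a guiding graph could be learned from relation statistics and then used to score a synthesized layout. It assumes each object is represented by a semantic label plus an axis-aligned 3D bounding box, and it uses simple co-occurrence and distance statistics as the "relationships"; all names (build_guiding_graph, local_score, global_score, accept_layout) and thresholds are illustrative stand-ins, not the paper's actual implementation.

```python
# Sketch of graph-guided dual-level scoring for synthesized 3D scenes.
# Assumptions: objects are dicts {label, center (3,), size (3,)}; relations
# are pairwise co-occurrence and mean center distance learned from real scenes.
from collections import defaultdict
from itertools import combinations
import numpy as np


def build_guiding_graph(scenes):
    """Learn pairwise relation statistics (the guiding graph) from real scenes."""
    stats = defaultdict(lambda: {"count": 0, "dist_sum": 0.0})
    for objects in scenes:
        for a, b in combinations(objects, 2):
            key = tuple(sorted((a["label"], b["label"])))
            d = float(np.linalg.norm(np.asarray(a["center"]) - np.asarray(b["center"])))
            stats[key]["count"] += 1
            stats[key]["dist_sum"] += d
    return {k: {"count": v["count"], "mean_dist": v["dist_sum"] / v["count"]}
            for k, v in stats.items()}


def boxes_overlap(a, b):
    """Axis-aligned bounding-box intersection test (local geometric check)."""
    a_min = np.asarray(a["center"]) - np.asarray(a["size"]) / 2
    a_max = np.asarray(a["center"]) + np.asarray(a["size"]) / 2
    b_min = np.asarray(b["center"]) - np.asarray(b["size"]) / 2
    b_max = np.asarray(b["center"]) + np.asarray(b["size"]) / 2
    return bool(np.all(a_max > b_min) and np.all(b_max > a_min))


def local_score(layout, graph, dist_tol=1.0):
    """Local-level constraint: penalize collisions and implausible distances."""
    penalty = 0.0
    for a, b in combinations(layout, 2):
        if boxes_overlap(a, b):
            penalty += 1.0  # geometric implausibility
        key = tuple(sorted((a["label"], b["label"])))
        if key in graph:
            d = float(np.linalg.norm(np.asarray(a["center"]) - np.asarray(b["center"])))
            penalty += max(0.0, abs(d - graph[key]["mean_dist"]) - dist_tol)
    return -penalty


def global_score(layout, graph, near_thresh=2.0):
    """Global-level constraint: compare the layout's relation edges to the guiding graph."""
    layout_edges = set()
    for a, b in combinations(layout, 2):
        d = float(np.linalg.norm(np.asarray(a["center"]) - np.asarray(b["center"])))
        if d < near_thresh:
            layout_edges.add(tuple(sorted((a["label"], b["label"]))))
    graph_edges = set(graph)
    if not layout_edges and not graph_edges:
        return 1.0
    # Jaccard overlap as a crude proxy for topological alignment.
    return len(layout_edges & graph_edges) / len(layout_edges | graph_edges)


def accept_layout(layout, graph, w_local=1.0, w_global=1.0, threshold=0.0):
    """Keep a synthesized scene only if the combined dual-level score is high enough."""
    return w_local * local_score(layout, graph) + w_global * global_score(layout, graph) >= threshold
```

In this sketch, candidate scenes would be generated, scored with `accept_layout`, and only the accepted ones added to the augmented training set; the actual framework's constraint formulations and graph alignment are more sophisticated than this Jaccard-style proxy.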
Similar Papers
Hierarchical Image-Guided 3D Point Cloud Segmentation in Industrial Scenes via Multi-View Bayesian Fusion
CV and Pattern Recognition
Helps robots understand cluttered factory spaces.
DBGroup: Dual-Branch Point Grouping for Weakly Supervised 3D Instance Segmentation
CV and Pattern Recognition
Helps computers understand 3D objects with less labeling.
Integrating SAM Supervision for 3D Weakly Supervised Point Cloud Segmentation
CV and Pattern Recognition
Helps computers understand 3D shapes with less 3D data.