Range-Edit: Semantic Mask Guided Outdoor LiDAR Scene Editing
By: Suchetan G. Uppur, Hemant Kumar, Vaibhav Kumar
Potential Business Impact:
Creates realistic driving scenes for self-driving cars.
Training autonomous driving and navigation systems requires large, diverse point cloud datasets that capture complex edge-case scenarios across dynamic urban settings. Acquiring such scenarios from real-world point cloud data, especially for critical edge cases, is challenging, which limits system generalization and robustness. Current methods instead simulate point cloud data within handcrafted 3D virtual environments, which is time-consuming, computationally expensive, and often fails to capture the full complexity of real-world scenes. To address these issues, this research proposes a novel approach that edits real-world LiDAR scans under semantic mask guidance to generate novel synthetic LiDAR point clouds. We combine range image projection with semantic mask conditioning for diffusion-based generation: point clouds are transformed into 2D range-view images, an intermediate representation that enables semantic editing with convex hull-based semantic masks. These masks guide the generation process by encoding the dimensions, orientations, and locations of objects in the real environment, ensuring geometric consistency and realism. The approach produces high-quality LiDAR point clouds, including complex edge cases and dynamic scenes, as validated on the KITTI-360 dataset, offering a cost-effective and scalable way to generate diverse LiDAR data and a step toward more robust autonomous driving systems.
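As a rough illustration of the two steps the abstract names, the sketch below projects a LiDAR point cloud to a 2D range image via spherical projection and rasterizes a convex hull mask over an object's projected pixels. This is a minimal sketch under stated assumptions: the function names, the 64x1024 resolution, and the vertical field-of-view values are illustrative choices in the style of common 64-beam sensor setups, not parameters taken from the paper.

```python
import numpy as np
from scipy.spatial import ConvexHull
from matplotlib.path import Path


def lidar_to_range_image(points, h=64, w=1024, fov_up=3.0, fov_down=-25.0):
    """Spherically project an (N, 3) LiDAR point cloud to an (h, w) range image.

    Resolution and vertical FOV mimic a typical 64-beam sensor; the
    paper's exact projection settings may differ.
    """
    fov_up_rad, fov_down_rad = np.radians(fov_up), np.radians(fov_down)
    fov = fov_up_rad - fov_down_rad

    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.linalg.norm(points, axis=1)          # range per point
    yaw = np.arctan2(y, x)                      # azimuth in [-pi, pi]
    pitch = np.arcsin(np.clip(z / np.maximum(r, 1e-8), -1.0, 1.0))

    # Map angles to pixel coordinates: column from azimuth, row from elevation.
    u = np.clip(np.floor(0.5 * (1.0 - yaw / np.pi) * w), 0, w - 1).astype(int)
    v = np.clip(np.floor((1.0 - (pitch - fov_down_rad) / fov) * h),
                0, h - 1).astype(int)

    # Write points farthest-first so the nearest return wins in each pixel.
    order = np.argsort(r)[::-1]
    image = np.full((h, w), -1.0, dtype=np.float32)   # -1 marks empty pixels
    image[v[order], u[order]] = r[order]
    return image, u, v


def convex_hull_mask(u, v, h, w):
    """Binary (h, w) mask covering the convex hull of an object's projected pixels.

    Needs at least three non-collinear pixels; a real pipeline would guard
    against degenerate hulls.
    """
    pix = np.stack([u, v], axis=1).astype(float)
    hull = Path(pix[ConvexHull(pix).vertices])   # hull polygon in pixel space
    yy, xx = np.mgrid[0:h, 0:w]
    inside = hull.contains_points(np.stack([xx.ravel(), yy.ravel()], axis=1))
    return inside.reshape(h, w)
```

The resulting mask, stacked with the range image, could then condition a diffusion model's denoising so that generated returns respect the masked object's footprint; the conditioning mechanism itself is specific to the paper and is not reproduced here.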
Similar Papers
LiDARDraft: Generating LiDAR Point Cloud from Versatile Inputs
CV and Pattern Recognition
Creates self-driving worlds from drawings or words.
FLARES: Fast and Accurate LiDAR Multi-Range Semantic Segmentation
CV and Pattern Recognition
Helps self-driving cars see better and faster.
INDOOR-LiDAR: Bridging Simulation and Reality for Robot-Centric 360 degree Indoor LiDAR Perception -- A Robot-Centric Hybrid Dataset
Robotics
Helps robots see and understand indoor spaces better.