Point Cloud Based Scene Segmentation: A Survey
By: Dan Halperin, Niklas Eisl
Potential Business Impact:
Helps self-driving cars understand roads better.
Autonomous driving is a safety-critical application, and it is therefore a top priority that the accompanying assistance systems provide precise information about the vehicle's surrounding environment. Tasks such as 3D Object Detection deliver an insufficiently detailed understanding of the surrounding scene because they only predict a bounding box for foreground objects. In contrast, 3D Semantic Segmentation provides richer and denser information about the environment by assigning a label to each individual point, which is of paramount importance for autonomous driving tasks such as navigation or lane changes. To inspire future research, in this review paper we provide a comprehensive overview of the current state-of-the-art methods in the field of Point Cloud Semantic Segmentation for autonomous driving. We categorize the approaches into projection-based, 3D-based, and hybrid methods. Moreover, we discuss the most important and commonly used datasets for this task and also emphasize the importance of synthetic data to support research when real-world data is limited. We further present the results of the different methods and compare them with respect to their segmentation accuracy and efficiency.
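To make the taxonomy above concrete, the sketch below illustrates the core step shared by projection-based methods: converting an unordered LiDAR point cloud into a 2D range image via spherical projection, so that standard image segmentation networks can be applied. The function name, image resolution, and vertical field-of-view values are illustrative assumptions (the FOV roughly matches common 64-beam sensors), not parameters taken from any specific paper in the survey.

```python
import numpy as np

def spherical_projection(points, H=64, W=1024, fov_up=3.0, fov_down=-25.0):
    """Project an (N, 3) LiDAR point cloud onto an H x W range image.

    Illustrative sketch of the projection step used by projection-based
    segmentation methods; H, W, and the FOV bounds are assumed values.
    """
    fov_up_rad = np.radians(fov_up)
    fov_down_rad = np.radians(fov_down)
    fov = fov_up_rad - fov_down_rad

    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    depth = np.linalg.norm(points, axis=1)

    yaw = np.arctan2(y, x)        # azimuth angle in [-pi, pi]
    pitch = np.arcsin(z / depth)  # elevation angle

    # Map angles to image coordinates (column from yaw, row from pitch).
    u = 0.5 * (1.0 - yaw / np.pi) * W
    v = (1.0 - (pitch - fov_down_rad) / fov) * H

    u = np.clip(np.floor(u), 0, W - 1).astype(np.int32)
    v = np.clip(np.floor(v), 0, H - 1).astype(np.int32)

    # -1 marks pixels with no return; write farther points first so
    # nearer points overwrite them when several fall on the same pixel.
    range_image = np.full((H, W), -1.0, dtype=np.float32)
    order = np.argsort(depth)[::-1]
    range_image[v[order], u[order]] = depth[order]
    return range_image
```

A 2D network then predicts a label per pixel, and the labels are re-projected back to the 3D points; hybrid methods combine such 2D views with native 3D (point- or voxel-based) processing.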
Similar Papers
3D Can Be Explored In 2D: Pseudo-Label Generation for LiDAR Point Clouds Using Sensor-Intensity-Based 2D Semantic Segmentation
CV and Pattern Recognition
Teaches self-driving cars to see without 3D maps.
Learning-based 3D Reconstruction in Autonomous Driving: A Comprehensive Survey
CV and Pattern Recognition
Helps self-driving cars see and understand the world.
Range-Edit: Semantic Mask Guided Outdoor LiDAR Scene Editing
CV and Pattern Recognition
Creates realistic driving scenes for self-driving cars.