Open-Set LiDAR Panoptic Segmentation Guided by Uncertainty-Aware Learning
By: Rohit Mohan, Julia Hindel, Florian Drews, and more
Potential Business Impact:
Helps self-driving cars detect objects they have never seen before.
Autonomous vehicles navigating open-world environments may encounter previously unseen object classes. However, most existing LiDAR panoptic segmentation models rely on closed-set assumptions and fail to detect unknown object instances. In this work, we propose ULOPS, an uncertainty-guided open-set panoptic segmentation framework that leverages Dirichlet-based evidential learning to model predictive uncertainty. Our architecture incorporates separate decoders for semantic segmentation with uncertainty estimation, embedding with prototype association, and instance center prediction. During inference, we use these uncertainty estimates to identify and segment unknown instances. To strengthen the model's ability to differentiate between known and unknown objects, we introduce three uncertainty-driven loss functions: a Uniform Evidence Loss that encourages high uncertainty in unknown regions, an Adaptive Uncertainty Separation Loss that enforces a consistent gap between the uncertainty estimates of known and unknown objects at a global scale, and a Contrastive Uncertainty Loss that refines this separation at a fine-grained level. To evaluate open-set performance, we extend benchmark settings on KITTI-360 and introduce a new open-set evaluation for nuScenes. Extensive experiments demonstrate that ULOPS consistently outperforms existing open-set LiDAR panoptic segmentation methods.
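The abstract's core mechanism, Dirichlet-based evidential uncertainty, can be sketched as follows. This is a minimal illustration of the standard evidential deep learning recipe (non-negative evidence, Dirichlet concentrations alpha = evidence + 1, uncertainty u = K / sum(alpha)), not the paper's actual implementation; the function name and softplus activation are assumptions for illustration.

```python
import numpy as np

def dirichlet_uncertainty(logits):
    """Map per-point class logits to expected class probabilities and a
    predictive uncertainty score via a Dirichlet evidential head.

    Recipe (common in evidential deep learning, assumed here):
      evidence e_k = softplus(logit_k) >= 0
      Dirichlet parameters alpha_k = e_k + 1
      uncertainty u = K / sum_k alpha_k, in (0, 1]; u -> 1 means no evidence.
    """
    evidence = np.log1p(np.exp(logits))           # softplus -> non-negative evidence
    alpha = evidence + 1.0                        # Dirichlet concentration parameters
    strength = alpha.sum(axis=-1, keepdims=True)  # total evidence S = sum_k alpha_k
    probs = alpha / strength                      # expected class probabilities
    k = logits.shape[-1]                          # number of known classes
    uncertainty = k / strength.squeeze(-1)        # high when evidence is low
    return probs, uncertainty

# A point with strong evidence for one class gets low uncertainty,
# while a point with near-zero evidence (e.g. an unknown object) gets u close to 1.
confident = np.array([[8.0, 0.0, 0.0]])    # known-looking point
ambiguous = np.array([[-5.0, -5.0, -5.0]])  # unknown-looking point
_, u_known = dirichlet_uncertainty(confident)
_, u_unknown = dirichlet_uncertainty(ambiguous)
print(u_known[0] < u_unknown[0])  # True: unknowns yield higher uncertainty
```

At inference, thresholding such a per-point uncertainty score is one natural way to flag candidate unknown regions before grouping them into instances, which is the role the abstract assigns to the uncertainty estimates.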
Similar Papers
A Novel Decomposed Feature-Oriented Framework for Open-Set Semantic Segmentation on LiDAR Data
CV and Pattern Recognition
Helps robots see and identify new things.
LOSC: LiDAR Open-voc Segmentation Consolidator
CV and Pattern Recognition
Helps self-driving cars see and understand everything.
Label-Efficient LiDAR Panoptic Segmentation
CV and Pattern Recognition
Teaches robots to understand surroundings with less data.