Training-Free Out-Of-Distribution Segmentation With Foundation Models
By: Laith Nayal, Hadi Salloum, Ahmad Taha, and more
Potential Business Impact:
Helps self-driving cars spot new, unseen dangers.
Detecting unknown objects in semantic segmentation is crucial for safety-critical applications such as autonomous driving. Large vision foundation models, including DINOv2, InternImage, and CLIP, have advanced visual representation learning by providing rich features that generalize well across diverse tasks. While their strength in closed-set semantic tasks is established, their capability to detect out-of-distribution (OoD) regions in semantic segmentation remains underexplored. In this work, we investigate whether foundation models fine-tuned on segmentation datasets can inherently distinguish in-distribution (ID) from OoD regions without any outlier supervision. We propose a simple, training-free approach that uses features from the InternImage backbone and applies K-Means clustering alongside confidence thresholding on raw decoder logits to identify OoD clusters. Our method achieves 50.02 average precision (AP) on the RoadAnomaly benchmark and 48.77 on the ADE-OoD benchmark with InternImage-L, surpassing several supervised and unsupervised baselines. These results suggest a promising direction for generic OoD segmentation methods that require minimal assumptions or additional data.
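The pipeline described in the abstract (backbone features → K-Means clustering → confidence thresholding on decoder logits) can be sketched compactly. The following Python sketch is illustrative, not the authors' code: the feature and logit shapes, the cluster count, the 0.5 threshold, and the use of max-softmax probability as the confidence score are all assumptions made for the example.

```python
# Minimal sketch of the training-free OoD scoring idea described above.
# Assumptions (not from the paper): K=8 clusters, a 0.5 confidence
# threshold, and max-softmax probability as the per-pixel confidence.
import numpy as np
from sklearn.cluster import KMeans

def ood_mask_from_features(features, logits, n_clusters=8, conf_threshold=0.5):
    """Flag low-confidence feature clusters as out-of-distribution.

    features: (H, W, C) backbone features (e.g., from InternImage).
    logits:   (H, W, num_classes) raw decoder logits.
    Returns a boolean (H, W) mask where True marks OoD pixels.
    """
    H, W, C = features.shape

    # 1. Cluster per-pixel features with K-Means.
    flat_feats = features.reshape(-1, C)
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(flat_feats)

    # 2. Per-pixel confidence: max softmax probability over classes.
    flat_logits = logits.reshape(-1, logits.shape[-1])
    exp = np.exp(flat_logits - flat_logits.max(axis=1, keepdims=True))
    confidence = (exp / exp.sum(axis=1, keepdims=True)).max(axis=1)

    # 3. Mark a whole cluster as OoD when its mean confidence is low.
    ood = np.zeros(H * W, dtype=bool)
    for k in range(n_clusters):
        member = labels == k
        if member.any() and confidence[member].mean() < conf_threshold:
            ood[member] = True
    return ood.reshape(H, W)
```

Scoring whole clusters rather than individual pixels is what makes the method region-level: a pixel is flagged when the cluster it belongs to is, on average, low-confidence, which smooths out noisy per-pixel scores.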
Similar Papers
From Pixel to Mask: A Survey of Out-of-Distribution Segmentation
CV and Pattern Recognition
Surveys ways to help self-driving cars spot unusual objects.
Revisiting Out-of-Distribution Detection in Real-time Object Detection: From Benchmark Pitfalls to a New Mitigation Paradigm
CV and Pattern Recognition
Teaches computers to ignore fake objects.
SupLID: Geometrical Guidance for Out-of-Distribution Detection in Semantic Segmentation
CV and Pattern Recognition
Helps self-driving cars spot unusual objects.