Multi-modality Anomaly Segmentation on the Road
By: Heng Gao, Zhuolin He, Shoumeng Qiu, and more
Potential Business Impact:
Helps self-driving cars spot hidden dangers.
Semantic segmentation allows autonomous driving cars to understand the vehicle's surroundings comprehensively. However, it is also crucial for the model to detect obstacles that may jeopardize the safety of autonomous driving systems. Based on our experiments, we find that current uni-modal anomaly segmentation frameworks tend to produce high anomaly scores for non-anomalous regions in images. Motivated by this empirical finding, we develop a multi-modal uncertainty-based anomaly segmentation framework, named MMRAS+, for autonomous driving systems. MMRAS+ effectively reduces the high anomaly outputs of non-anomalous classes by introducing the text modality via the CLIP text encoder. Indeed, MMRAS+ is the first multi-modal anomaly segmentation solution for autonomous driving. Moreover, we develop an ensemble module to further boost anomaly segmentation performance. Experiments on the RoadAnomaly, SMIYC, and Fishyscapes validation datasets demonstrate the superior performance of our method. The code is available at https://github.com/HengGao12/MMRAS_plus.
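The core idea, suppressing anomaly scores for pixels that resemble known classes, can be illustrated with a minimal sketch. This is not the paper's actual implementation: the embeddings, class names, and the `rescaled_anomaly_score` helper below are all hypothetical stand-ins (a real pipeline would use CLIP text embeddings of class prompts and per-pixel image features), but the rescaling logic captures the intuition of using text-modality similarity to tone down false anomaly responses.

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

# Hypothetical text embeddings for known (in-distribution) classes,
# standing in for CLIP text-encoder outputs on prompts like
# "a photo of a road". Real embeddings would be high-dimensional.
class_text_embs = {
    "road":     [0.9, 0.1, 0.0],
    "car":      [0.1, 0.9, 0.1],
    "sidewalk": [0.7, 0.2, 0.3],
}

def rescaled_anomaly_score(pixel_emb, raw_score):
    """Scale a raw per-pixel anomaly score down when the pixel's
    embedding is close to any known-class text embedding."""
    max_sim = max(cosine(pixel_emb, t) for t in class_text_embs.values())
    # High similarity to a known class -> little anomaly score survives.
    return raw_score * (1.0 - max(0.0, min(1.0, max_sim)))

# A pixel embedding that strongly resembles "road" keeps almost none
# of its (spuriously high) raw anomaly score.
road_like = [0.88, 0.12, 0.02]
print(rescaled_anomaly_score(road_like, raw_score=0.8))
```

The key design point is that the suppression is driven by a modality (text) independent of the image-based anomaly head, which is why it can correct systematic over-scoring on non-anomalous regions.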
Similar Papers
Robust Anomaly Detection through Multi-Modal Autoencoder Fusion for Small Vehicle Damage Detection
Machine Learning (CS)
Finds car dents and damage instantly.
Segmenting Objectiveness and Task-awareness Unknown Region for Autonomous Driving
CV and Pattern Recognition
Helps self-driving cars spot unexpected road dangers.
Benchmarking Multi-modal Semantic Segmentation under Sensor Failures: Missing and Noisy Modality Robustness
CV and Pattern Recognition
Tests how well AI sees with missing or bad info.