Uncertainty-Supervised Interpretable and Robust Evidential Segmentation
By: Yuzhu Li, An Sui, Fuping Wu, and more
Potential Business Impact:
Makes AI segmentation of medical scans more trustworthy and reliable.
Uncertainty estimation has been widely studied in medical image segmentation as a tool to provide reliability, particularly in deep learning approaches. However, previous methods generally lack effective supervision of the uncertainty estimates, leading to low interpretability and robustness of the predictions. In this work, we propose a self-supervised approach to guide the learning of uncertainty. Specifically, we introduce three principles relating uncertainty to image gradients around object boundaries and in noisy regions. Based on these principles, two uncertainty supervision losses are designed. These losses enhance the alignment between model predictions and human interpretation. Accordingly, we also propose novel quantitative metrics for evaluating the interpretability and robustness of uncertainty. Experimental results demonstrate that, compared to state-of-the-art approaches, the proposed method achieves competitive segmentation performance and superior results in out-of-distribution (OOD) scenarios while significantly improving the interpretability and robustness of uncertainty estimation. Code is available at https://github.com/suiannaius/SURE.
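The abstract does not spell out the form of the two supervision losses, but a minimal sketch can illustrate one plausible reading of the boundary principle: predicted uncertainty near segmentation boundaries should be high where the image gradient is weak (ambiguous edges) and low where it is sharp. Everything below is a hypothetical PyTorch illustration, not the authors' implementation; the function names, the Sobel-based gradient, and the L1 target matching are all assumptions, and the noise-related loss the abstract also mentions is omitted.

```python
# Hypothetical sketch (NOT the authors' code): one plausible form of an
# uncertainty supervision loss tying predicted uncertainty to image
# gradients near predicted boundaries, as described in the abstract.
import torch
import torch.nn.functional as F

def sobel_gradient_magnitude(image: torch.Tensor) -> torch.Tensor:
    """Per-pixel gradient magnitude of a (B, 1, H, W) image via Sobel filters."""
    kx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]],
                      device=image.device).view(1, 1, 3, 3)
    ky = kx.transpose(2, 3)
    gx = F.conv2d(image, kx, padding=1)
    gy = F.conv2d(image, ky, padding=1)
    return torch.sqrt(gx ** 2 + gy ** 2 + 1e-8)

def boundary_uncertainty_loss(uncertainty: torch.Tensor,
                              image: torch.Tensor,
                              boundary_mask: torch.Tensor) -> torch.Tensor:
    """Encourage high uncertainty on boundary pixels with weak image
    gradients and low uncertainty where edges are sharp.

    uncertainty:   (B, 1, H, W) in [0, 1], e.g. from an evidential head
    image:         (B, 1, H, W) intensity image
    boundary_mask: (B, 1, H, W), 1 on predicted boundary pixels, else 0
    """
    grad = sobel_gradient_magnitude(image)
    # Normalize gradients to [0, 1] per image so the target is scale-free.
    gmax = grad.amax(dim=(2, 3), keepdim=True).clamp_min(1e-8)
    target = 1.0 - grad / gmax  # weak edge -> high target uncertainty
    masked = boundary_mask.bool()
    if not masked.any():        # no boundary pixels: nothing to supervise
        return uncertainty.sum() * 0.0
    return F.l1_loss(uncertainty[masked], target[masked])
```

Under this reading, the loss only supplies a self-supervised training signal for the uncertainty head; the segmentation itself would still be trained with a standard objective such as Dice or cross-entropy.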
Similar Papers
Rethinking Semi-supervised Segmentation Beyond Accuracy: Reliability and Robustness
CV and Pattern Recognition
Makes self-driving cars safer by checking their vision.
Uncertainty-Aware Segmentation Quality Prediction via Deep Learning Bayesian Modeling: Comprehensive Evaluation and Interpretation on Skin Cancer and Liver Segmentation
CV and Pattern Recognition
Checks AI medical images without expert drawings.
FARCLUSS: Fuzzy Adaptive Rebalancing and Contrastive Uncertainty Learning for Semi-Supervised Semantic Segmentation
CV and Pattern Recognition
Teaches computers to see better, even in tricky spots.