Hierarchical Self-Supervised Adversarial Training for Robust Vision Models in Histopathology
By: Hashmat Shadab Malik, Shahina Kunhimon, Muzammal Naseer, and more
Potential Business Impact:
Makes medical image AI more trustworthy.
Adversarial attacks pose significant challenges for vision models in critical fields like healthcare, where reliability is essential. Although adversarial training has been well studied for natural images, its application to biomedical and microscopy data remains limited. Existing self-supervised adversarial training methods overlook the hierarchical structure of histopathology images, where patient-slide-patch relationships provide valuable discriminative signals. To address this, we propose Hierarchical Self-Supervised Adversarial Training (HSAT), which exploits these relationships to craft adversarial examples via multi-level contrastive learning and integrates them into adversarial training for enhanced robustness. We evaluate HSAT on the multi-class histopathology dataset OpenSRH, and the results show that HSAT outperforms existing methods from both the biomedical and natural image domains. HSAT enhances robustness, achieving an average gain of 54.31% in the white-box setting and reducing performance drops to 3-4% in the black-box setting, compared to 25-30% for the baseline. These results set a new benchmark for adversarial training in this domain, paving the way for more robust models. Our code for training and evaluation is available at https://github.com/HashmatShadab/HSAT.
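The key idea, multi-level contrastive learning over the patient-slide-patch hierarchy, can be sketched as a supervised-contrastive (SupCon-style) loss applied at each level of granularity. The snippet below is a minimal numpy illustration, not the paper's implementation: the function names (`supcon_loss`, `hsat_loss`), the per-level weights, and the use of three equally weighted levels are assumptions for exposition. In HSAT, adversarial examples would be crafted by perturbing inputs to maximize such a loss (e.g. via PGD), then used for training; only the loss itself is shown here.

```python
import numpy as np

def supcon_loss(emb, labels, temp=0.1):
    """Supervised-contrastive loss for one hierarchy level.

    emb: (N, D) embeddings; labels: (N,) ints defining positives at this level.
    Anchors with no positives (unique labels) are skipped.
    """
    emb = emb / np.linalg.norm(emb, axis=1, keepdims=True)  # L2-normalize
    sim = emb @ emb.T / temp
    n = len(labels)
    eye = np.eye(n, dtype=bool)
    pos = (labels[:, None] == labels[None, :]) & ~eye      # same-label pairs
    sim_masked = np.where(eye, -np.inf, sim)               # exclude self-similarity
    log_prob = sim_masked - np.log(np.exp(sim_masked).sum(axis=1, keepdims=True))
    has_pos = pos.any(axis=1)
    per_anchor = -np.where(pos, log_prob, 0.0).sum(axis=1)[has_pos] \
                 / pos.sum(axis=1)[has_pos]
    return per_anchor.mean()

def hsat_loss(emb, patch_ids, slide_ids, patient_ids, weights=(1.0, 1.0, 1.0)):
    """Multi-level objective: positives at patch, slide, and patient granularity
    (hypothetical equal weighting; the paper may weight levels differently)."""
    levels = (patch_ids, slide_ids, patient_ids)
    return sum(w * supcon_loss(emb, np.asarray(l))
               for w, l in zip(weights, levels))

# Toy usage: 2 patients x 2 slides x 2 patch views, embeddings clustered by patient.
rng = np.random.default_rng(0)
base = np.array([[1.0, 0.0], [0.0, 1.0]])
emb = np.vstack([base[i // 4] + 0.05 * rng.standard_normal(2) for i in range(8)])
patient = np.array([0] * 4 + [1] * 4)
slide = np.array([0, 0, 1, 1, 2, 2, 3, 3])
patch = np.arange(8) // 2  # two augmented views per patch (illustrative)
print(hsat_loss(emb, patch, slide, patient))
```

Maximizing this loss with respect to the input pixels (rather than minimizing it with respect to the model) yields adversarial perturbations that attack all three levels of the hierarchy at once, which is the signal the abstract says single-level methods leave unused.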
Similar Papers
Leveraging Adversarial Learning for Pathological Fidelity in Virtual Staining
CV and Pattern Recognition
Makes digital microscope images look like real ones.
DA-SSL: Self-Supervised Domain Adaptor to Leverage Foundational Models in TURBT Histopathology Slides
CV and Pattern Recognition
Helps doctors spot bladder cancer better.
A Semantics-Aware Hierarchical Self-Supervised Approach to Classification of Remote Sensing Images
CV and Pattern Recognition
Teaches computers to sort satellite pictures better.