Unsupervised Domain Adaptation via Similarity-based Prototypes for Cross-Modality Segmentation
By: Ziyu Ye, Chen Ju, Chaofan Ma, and others
Potential Business Impact:
Helps computers understand new pictures without new training.
Deep learning models have achieved great success on various vision challenges, but a well-trained model faces drastic performance degradation when applied to unseen data. Since models are sensitive to domain shift, unsupervised domain adaptation attempts to reduce the domain gap while avoiding costly annotation of unseen domains. This paper proposes a novel framework for cross-modality segmentation via similarity-based prototypes. Specifically, we learn class-wise prototypes within an embedding space, then introduce a similarity constraint to make these prototypes representative of each semantic class while remaining separable across classes. Moreover, we use dictionaries to store prototypes extracted from different images, which prevents the class-missing problem, enables contrastive learning of prototypes, and further improves performance. Extensive experiments show that our method outperforms other state-of-the-art methods.
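The abstract's core ingredients can be sketched in a few lines: compute one prototype per class as the mean feature of that class's pixels, keep a dictionary (memory bank) of prototypes from earlier images so that classes missing from the current image still have entries, and apply an InfoNCE-style contrastive loss that pulls a prototype toward the stored prototype of its own class and pushes it away from the others. This is a minimal illustrative sketch, not the authors' implementation; all function names, the cosine similarity choice, and the temperature value are assumptions.

```python
import numpy as np

def class_prototypes(features, labels, num_classes):
    """One prototype per class: the mean feature vector of all pixels
    assigned to that class. Classes absent from the image are skipped,
    which is why a cross-image dictionary is needed downstream."""
    protos = {}
    for c in range(num_classes):
        mask = labels == c
        if mask.any():
            protos[c] = features[mask].mean(axis=0)
    return protos

def cosine(a, b):
    """Cosine similarity between two feature vectors (assumed metric)."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def prototype_contrastive_loss(protos, bank, temperature=0.1):
    """InfoNCE-style loss over prototypes: each image-level prototype
    should match the bank entry of its own class (positive) and differ
    from entries of other classes (negatives)."""
    losses = []
    keys = sorted(bank)
    for c, p in protos.items():
        if c not in bank:
            continue
        sims = np.array([cosine(p, bank[k]) / temperature for k in keys])
        pos = keys.index(c)
        # -log softmax of the positive similarity
        losses.append(-sims[pos] + np.log(np.exp(sims).sum()))
    return float(np.mean(losses)) if losses else 0.0

# Toy example: two well-separated classes.
feats = np.array([[1., 0.], [1., 0.], [0., 1.], [0., 1.]])
labels = np.array([0, 0, 1, 1])
protos = class_prototypes(feats, labels, num_classes=2)
bank = {0: np.array([1., 0.]), 1: np.array([0., 1.])}
loss = prototype_contrastive_loss(protos, bank)
```

In the paper's setting the bank would be updated as training proceeds (e.g. by a running average over images), so that every semantic class stays represented even when individual images miss some classes.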
Similar Papers
Unified and Semantically Grounded Domain Adaptation for Medical Image Segmentation
CV and Pattern Recognition
Helps doctors see body parts in scans better.
Similarity-Based Domain Adaptation with LLMs
Computation and Language
Teaches computers new tasks without needing old examples.