GMM-COMET: Continual Source-Free Universal Domain Adaptation via a Mean Teacher and Gaussian Mixture Model-Based Pseudo-Labeling
By: Pascal Schlachter, Bin Yang
Potential Business Impact:
Teaches computers to keep learning as their data streams change over time.
Unsupervised domain adaptation tackles the problem that domain shifts between training and test data impair the performance of neural networks in many real-world applications. Moreover, in realistic scenarios, the source data may no longer be available during adaptation, and the label space of the target domain may differ from the source label space. This setting, known as source-free universal domain adaptation (SF-UniDA), has recently gained attention, but all existing approaches assume only a single domain shift from source to target. In this work, we present the first study on continual SF-UniDA, where the model must adapt sequentially to a stream of multiple different unlabeled target domains. Building upon our previous methods for online SF-UniDA, we combine their key ideas by integrating Gaussian mixture model-based pseudo-labeling into a mean teacher framework for improved stability over long adaptation sequences. Additionally, we introduce consistency losses for further robustness. The resulting method, GMM-COMET, provides a strong first baseline for continual SF-UniDA and is the only approach in our experiments to consistently improve upon the source-only model across all evaluated scenarios. Our code is available at https://github.com/pascalschlachter/GMM-COMET.
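To make the abstract's three ingredients concrete, here is a minimal PyTorch-style sketch of how GMM-based pseudo-labeling, a mean teacher EMA update, and a consistency loss could fit together in one adaptation step. Everything in it is an illustrative assumption rather than the authors' implementation: the names (ema_update, gmm_pseudo_labels, adaptation_step), the model interface returning (features, logits), the rejection threshold, and the EMA momentum are all made up for this sketch; the actual method is in the linked repository.

```python
# Hedged sketch of continual SF-UniDA with a mean teacher and GMM pseudo-labels.
# All function names, hyperparameters, and the (features, logits) model
# interface are illustrative assumptions, not the paper's actual code.
import torch
import torch.nn.functional as F
from sklearn.mixture import GaussianMixture


def ema_update(teacher, student, momentum=0.999):
    """Mean-teacher update: teacher weights follow an EMA of the student."""
    with torch.no_grad():
        for t, s in zip(teacher.parameters(), student.parameters()):
            t.mul_(momentum).add_(s, alpha=1.0 - momentum)


def gmm_pseudo_labels(features, n_classes, reject_threshold=0.5):
    """Fit a GMM on teacher features and read off pseudo-labels.

    For illustration, component indices stand in for class labels; the
    paper instead ties mixture components to the source classes. Samples
    whose maximum posterior falls below the threshold are rejected so
    they can be treated as potentially 'unknown' (target-private).
    """
    gmm = GaussianMixture(n_components=n_classes, covariance_type="diag")
    feats = features.detach().cpu().numpy()
    gmm.fit(feats)  # assumes the batch is large enough to fit n_classes components
    posteriors = torch.from_numpy(gmm.predict_proba(feats)).float()
    conf, labels = posteriors.max(dim=1)
    return labels, conf >= reject_threshold


def adaptation_step(student, teacher, weak, strong, optimizer, n_classes):
    """One step on an unlabeled target batch; weak/strong are two augmented
    views of the same batch, and both models return (features, logits)."""
    with torch.no_grad():
        t_feats, t_logits = teacher(weak)
        labels, known = gmm_pseudo_labels(t_feats, n_classes)
    _, s_logits = student(strong)
    labels, known = labels.to(s_logits.device), known.to(s_logits.device)

    # Pseudo-label loss on confidently 'known' samples only.
    ce = F.cross_entropy(s_logits[known], labels[known]) if known.any() else 0.0
    # Consistency loss: student (strong view) should match teacher (weak view).
    cons = F.kl_div(F.log_softmax(s_logits, dim=1),
                    F.softmax(t_logits, dim=1), reduction="batchmean")
    loss = ce + cons

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    ema_update(teacher, student)  # slowly drifting teacher gives stability
    return float(loss)
```

In a continual stream, adaptation_step would simply be called repeatedly as target domains change; the slowly updated teacher is what keeps the pseudo-labels from collapsing over long adaptation sequences.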
Similar Papers
Collaborative Learning with Multiple Foundation Models for Source-Free Domain Adaptation
CV and Pattern Recognition
Uses multiple AI brains to improve computer vision.
Source-Free Domain Adaptation via Multi-view Contrastive Learning
CV and Pattern Recognition
Teaches computers to learn from new data privately.
Analysis of Pseudo-Labeling for Online Source-Free Universal Domain Adaptation
Machine Learning (CS)
Improves AI learning when new data is different.