Score: 3

Connecting Domains and Contrasting Samples: A Ladder for Domain Generalization

Published: October 19, 2025 | arXiv ID: 2510.16704v1

By: Tianxin Wei, Yifan Chen, Xinrui He, and more

Potential Business Impact:

Helps models trained on some data sources generalize to new, unseen data sources.

Business Areas:
A/B Testing, Data and Analytics

Distribution shifts between training and testing samples frequently occur in practice and impede model generalization performance. This crucial challenge motivates studies of domain generalization (DG), which aim to predict labels on unseen target-domain data using only data from source domains. It is intuitive to expect that the class-separated representations learned in contrastive learning (CL) would improve DG, yet the reality is quite the opposite: directly applying CL deteriorates performance. We analyze this phenomenon with insights from CL theory and find that a lack of intra-class connectivity in the DG setting causes the deficiency. We thus propose a new paradigm, domain-connecting contrastive learning (DCCL), to enhance conceptual connectivity across domains and obtain generalizable representations for DG. On the data side, more aggressive data augmentation and cross-domain positive samples are introduced to improve intra-class connectivity. On the model side, to better embed unseen test domains, we propose model anchoring, which exploits the intra-class connectivity in pre-trained representations, and complement the anchoring with a generative transformation loss. Extensive experiments on five standard DG benchmarks verify that DCCL outperforms state-of-the-art baselines even without domain supervision. The detailed model implementation and code are provided at https://github.com/weitianxin/DCCL
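
As a rough illustration of the data-side idea, the sketch below (not taken from the paper's repository; the function and variable names are hypothetical) shows a contrastive loss in which the positives for each anchor are same-class samples drawn from other source domains, one way to encourage the intra-class connectivity the abstract describes.

```python
# Minimal sketch of a cross-domain supervised contrastive loss.
# Assumption: a batch mixes samples from several source domains, with
# per-sample class labels and domain ids available.
import torch
import torch.nn.functional as F

def cross_domain_contrastive_loss(features, labels, domains, temperature=0.1):
    """features: (N, D) embeddings, labels: (N,) class ids, domains: (N,) domain ids."""
    z = F.normalize(features, dim=1)                     # unit-norm embeddings
    sim = z @ z.t() / temperature                        # pairwise similarities
    n = z.size(0)
    self_mask = torch.eye(n, dtype=torch.bool, device=z.device)

    same_class = labels.unsqueeze(0) == labels.unsqueeze(1)
    diff_domain = domains.unsqueeze(0) != domains.unsqueeze(1)
    # Positives: same class but from a *different* source domain, so the model
    # is pulled toward class-consistent features that transfer across domains.
    pos_mask = (same_class & diff_domain & ~self_mask).float()

    # Log-softmax over every other sample in the batch (self excluded).
    sim = sim.masked_fill(self_mask, float('-inf'))
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)

    pos_counts = pos_mask.sum(dim=1).clamp(min=1)
    per_anchor = -(log_prob * pos_mask).sum(dim=1) / pos_counts
    # Only anchors that actually have a cross-domain positive contribute.
    has_pos = pos_mask.sum(dim=1) > 0
    return per_anchor[has_pos].mean()
```

In this sketch, restricting positives to other domains is a design choice that mirrors the abstract's "cross-domain positive samples"; the paper's actual DCCL objective (including model anchoring and the generative transformation loss) is specified in the linked repository.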

Country of Origin
🇭🇰 🇺🇸 United States, Hong Kong

Repos / Data Links
https://github.com/weitianxin/DCCL

Page Count
17 pages

Category
Computer Science:
CV and Pattern Recognition