CoDoL: Conditional Domain Prompt Learning for Out-of-Distribution Generalization
By: Min Zhang, Bo Jiang, Jie Zhou, and more
Potential Business Impact:
Helps computers recognize pictures correctly even when they look different from the ones they learned from.
Recent advances in pre-trained vision-language models (VLMs), e.g., contrastive language-image pre-training (CLIP), have shown great potential for learning out-of-distribution (OOD) representations. Despite their competitive performance, prompt-based CLIP methods still suffer from: (i) inaccurate text descriptions, which degrade accuracy and robustness and pose a challenge for zero-shot CLIP methods; and (ii) limited vision-language embedding alignment, which significantly affects generalization performance. To tackle these issues, this paper proposes Conditional Domain prompt Learning (CoDoL), a novel method that exploits readily available domain information to form prompts and improves vision-language embedding alignment for better OOD generalization. To capture both instance-specific and domain-specific information, we further propose a lightweight Domain Meta Network (DMN) that generates input-conditional tokens for images in each domain. Extensive experiments on four OOD benchmarks (PACS, VLCS, OfficeHome, and DigitDG) validate the effectiveness of CoDoL in improving both vision-language embedding alignment and out-of-distribution generalization.
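The abstract describes prompts built from domain information plus a lightweight Domain Meta Network that emits an input-conditional token per image. Below is a minimal PyTorch sketch of that idea; the paper's actual code is not reproduced here, so the class names (DomainMetaNet, CoDoLPromptLearner), dimensions, and the per-domain MLP heads are all illustrative assumptions, loosely following CoCoOp-style conditioning.

```python
import torch
import torch.nn as nn

class DomainMetaNet(nn.Module):
    """Hypothetical lightweight meta network: one small MLP head per domain
    maps an image feature to an input-conditional prompt token."""
    def __init__(self, feat_dim=512, token_dim=512, num_domains=4):
        super().__init__()
        self.heads = nn.ModuleList([
            nn.Sequential(
                nn.Linear(feat_dim, feat_dim // 16),
                nn.ReLU(inplace=True),
                nn.Linear(feat_dim // 16, token_dim),
            )
            for _ in range(num_domains)
        ])

    def forward(self, image_feats, domain_ids):
        # image_feats: (B, feat_dim); domain_ids: (B,) integer domain labels
        tokens = [self.heads[int(d)](f) for f, d in zip(image_feats, domain_ids)]
        return torch.stack(tokens)  # (B, token_dim)

class CoDoLPromptLearner(nn.Module):
    """Assembles per-class prompts of the form
    [shared learnable context | conditional domain token | class embedding]."""
    def __init__(self, n_ctx=4, token_dim=512, num_domains=4):
        super().__init__()
        # Shared learnable context vectors, as in standard prompt learning
        self.ctx = nn.Parameter(0.02 * torch.randn(n_ctx, token_dim))
        self.meta_net = DomainMetaNet(token_dim, token_dim, num_domains)

    def forward(self, image_feats, domain_ids, class_embeds):
        # class_embeds: (C, token_dim) frozen CLIP text embeddings of class names
        B, D = image_feats.shape
        C = class_embeds.size(0)
        cond = self.meta_net(image_feats, domain_ids)            # (B, D)
        ctx = self.ctx.expand(B, C, -1, -1)                      # (B, C, n_ctx, D)
        cond = cond.view(B, 1, 1, D).expand(-1, C, -1, -1)       # (B, C, 1, D)
        cls = class_embeds.view(1, C, 1, D).expand(B, -1, -1, -1)
        return torch.cat([ctx, cond, cls], dim=2)                # (B, C, n_ctx+2, D)

# Toy usage with synthetic tensors standing in for CLIP features/embeddings:
learner = CoDoLPromptLearner()
img = torch.randn(8, 512)                  # image features from a frozen encoder
dom = torch.randint(0, 4, (8,))            # domain label per image
cls = torch.randn(10, 512)                 # class-name embeddings
prompts = learner(img, dom, cls)           # (8, 10, 6, 512)
```

In a full pipeline, each (image, class) prompt sequence would be passed through CLIP's frozen text encoder and matched against the image feature with a contrastive or cross-entropy objective, which is how the method aims to tighten the vision-language embedding alignment the abstract targets.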
Similar Papers
Prompt Optimization Meets Subspace Representation Learning for Few-shot Out-of-Distribution Detection
Machine Learning (CS)
AI spots new things it hasn't seen before.
Recent Advances in Out-of-Distribution Detection with CLIP-Like Models: A Survey
CV and Pattern Recognition
Helps AI spot fake or unusual pictures.
Generalizable Prompt Learning of CLIP: A Brief Overview
CV and Pattern Recognition
Teaches computers to understand pictures and words.