Cross-Domain Few-Shot Learning via Multi-View Collaborative Optimization with Vision-Language Models
By: Dexia Chen, Wentao Zhang, Qianjie Zhu, and more
Potential Business Impact:
Helps computers recognize new kinds of images, such as medical or satellite pictures, from just a few examples.
Vision-language models (VLMs) pre-trained on natural image and language data, such as CLIP, have exhibited significant potential in few-shot image recognition tasks, leading to the development of various efficient transfer learning methods. These methods exploit the knowledge pre-learned by VLMs and have achieved strong performance on standard image datasets. However, their effectiveness is often limited in cross-domain tasks whose imaging domains differ from natural images. To address this limitation, we propose Consistency-guided Multi-view Collaborative Optimization (CoMuCo), a novel fine-tuning strategy for VLMs. The strategy employs two functionally complementary expert modules to extract multi-view features, while incorporating prior knowledge-based consistency constraints and information geometry-based consensus mechanisms to enhance the robustness of feature learning. Additionally, a new cross-domain few-shot benchmark is established to enable comprehensive evaluation of methods on imaging domains distinct from natural images. Extensive empirical evaluations on both existing and newly proposed benchmarks suggest that CoMuCo consistently outperforms current methods in few-shot tasks. The code and benchmark will be released.
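The abstract does not specify the expert modules or loss terms, so the sketch below is only one plausible PyTorch-style reading, not the paper's actual method. The names `ExpertHead` and `comuco_style_loss` are hypothetical: two lightweight heads over frozen CLIP features stand in for the "functionally complementary expert modules," a KL term pulling each head toward the frozen zero-shot prediction stands in for the prior knowledge-based consistency constraint, and a symmetrized KL between the two heads stands in for the information geometry-based consensus mechanism.

```python
# Minimal sketch of a CoMuCo-style objective (hypothetical; the paper's
# actual modules and losses are not detailed in the abstract).
import torch
import torch.nn as nn
import torch.nn.functional as F

class ExpertHead(nn.Module):
    """Hypothetical lightweight expert: a residual linear adapter
    on top of frozen CLIP image features, plus a linear classifier."""
    def __init__(self, dim: int, num_classes: int):
        super().__init__()
        self.adapter = nn.Linear(dim, dim)
        self.classifier = nn.Linear(dim, num_classes)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        z = feats + self.adapter(feats)              # residual adaptation
        return self.classifier(F.normalize(z, dim=-1))

def comuco_style_loss(logits_a, logits_b, zero_shot_logits, labels, lam=1.0):
    """Toy objective: supervised CE on each expert plus two regularizers.

    - prior consistency: KL of each expert's prediction toward the frozen
      zero-shot CLIP prediction (proxy for the prior-knowledge constraint).
    - consensus: symmetrized KL between the two experts (proxy for the
      information-geometry consensus; the real mechanism may differ).
    """
    ce = F.cross_entropy(logits_a, labels) + F.cross_entropy(logits_b, labels)
    prior = F.softmax(zero_shot_logits.detach(), dim=-1)  # fixed target
    log_pa = F.log_softmax(logits_a, dim=-1)
    log_pb = F.log_softmax(logits_b, dim=-1)
    consist = (F.kl_div(log_pa, prior, reduction="batchmean")
             + F.kl_div(log_pb, prior, reduction="batchmean"))
    consensus = 0.5 * (F.kl_div(log_pa, log_pb.exp(), reduction="batchmean")
                     + F.kl_div(log_pb, log_pa.exp(), reduction="batchmean"))
    return ce + lam * (consist + consensus)
```

The symmetrized KL here is just a simple divergence-based stand-in; an information-geometry consensus could instead aggregate the experts along a geodesic on the probability simplex, which the abstract does not spell out.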
Similar Papers
VisCoP: Visual Probing for Video Domain Adaptation of Vision Language Models
CV and Pattern Recognition
Helps AI understand new things without forgetting old ones.
Efficient Few-Shot Learning in Remote Sensing: Fusing Vision and Vision-Language Models
CV and Pattern Recognition
Finds planes in pictures better, even blurry ones.
Enhanced Continual Learning of Vision-Language Models with Model Fusion
CV and Pattern Recognition
Keeps AI smart when learning new things.