Score: 1

VisCoP: Visual Probing for Video Domain Adaptation of Vision Language Models

Published: October 15, 2025 | arXiv ID: 2510.13808v1

By: Dominick Reilly, Manish Kumar Govind, Le Xue, and more

Potential Business Impact:

Enables vision-language AI to adapt to new visual domains without forgetting its existing capabilities.

Business Areas:
Image Recognition, Data and Analytics, Software

Large Vision-Language Models (VLMs) excel at general visual reasoning tasks but exhibit sharp performance degradation when applied to novel domains with substantial distribution shifts from pretraining data. Existing domain adaptation approaches finetune different VLM components, but this often results in limited domain-specific feature learning or catastrophic forgetting of prior capabilities. To address these issues, we introduce Vision Contextualized Probing (VisCoP), which augments the VLM's vision encoder with a compact set of learnable visual probes. These probes enable efficient domain-specific adaptation with minimal modification to pretrained parameters. We evaluate VisCoP across three challenging domain adaptation settings: cross-view (exocentric to egocentric), cross-modal (RGB to depth), and cross-task (human understanding to robot control). Experiments show that VisCoP consistently outperforms existing adaptation strategies, achieving superior performance on target domains while effectively retaining source-domain knowledge.
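To make the core idea concrete, below is a minimal sketch of how learnable "visual probes" could be attached to a frozen vision encoder, assuming the probes are extra tokens concatenated to the patch-token sequence and that only they receive gradients. The class and parameter names (ProbedVisionEncoder, num_probes, hidden_dim) are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (assumptions, not the paper's code): model "visual probes" as a
# small set of learnable tokens prepended to a frozen encoder's patch tokens, so
# only the probes are updated during domain adaptation.
import torch
import torch.nn as nn

class ProbedVisionEncoder(nn.Module):
    def __init__(self, vision_encoder: nn.Module, hidden_dim: int, num_probes: int = 16):
        super().__init__()
        self.encoder = vision_encoder
        for p in self.encoder.parameters():  # keep pretrained weights frozen
            p.requires_grad = False
        # compact set of learnable visual probes (size/shape are assumptions)
        self.probes = nn.Parameter(torch.randn(1, num_probes, hidden_dim) * 0.02)

    def forward(self, patch_tokens: torch.Tensor) -> torch.Tensor:
        # patch_tokens: (batch, num_patches, hidden_dim), produced by a frozen
        # patch-embedding stage; the wrapped encoder is assumed to accept tokens.
        b = patch_tokens.size(0)
        tokens = torch.cat([self.probes.expand(b, -1, -1), patch_tokens], dim=1)
        return self.encoder(tokens)  # probes can attend to domain-specific cues

# Usage sketch: only the probe parameters go to the optimizer, so the pretrained
# backbone (and its source-domain knowledge) is left untouched.
# encoder = ProbedVisionEncoder(pretrained_vit_blocks, hidden_dim=1024)
# optimizer = torch.optim.AdamW([encoder.probes], lr=1e-4)
```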

Repos / Data Links

Page Count
19 pages

Category
Computer Science:
CV and Pattern Recognition