Decoupling Template Bias in CLIP: Harnessing Empty Prompts for Enhanced Few-Shot Learning
By: Zhenyu Zhang, Guangyao Chen, Yixiong Zou, and more
Potential Business Impact:
Fixes AI's image guessing by ignoring misleading words.
The Contrastive Language-Image Pre-Training (CLIP) model excels at few-shot learning by aligning visual and textual representations. Our study shows that template-sample similarity (TSS), the resemblance between a text template and an image sample, introduces bias: the model comes to rely on template proximity rather than true sample-to-category alignment, reducing both classification accuracy and robustness. We present a framework built on empty prompts: textual inputs that convey the idea of "emptiness" without any category information. These prompts capture unbiased template features and offset TSS bias. The framework operates in two stages. During pre-training, empty prompts reveal and reduce template-induced bias within the CLIP encoder. During few-shot fine-tuning, a bias calibration loss enforces correct alignment between images and their categories, ensuring the model focuses on relevant visual cues. Experiments across multiple benchmarks demonstrate that our template correction significantly reduces performance fluctuations caused by TSS, yielding higher classification accuracy and stronger robustness. Code is available at https://github.com/zhenyuZ-HUST/Decoupling-Template-Bias-in-CLIP.
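To make the core idea concrete, below is a minimal sketch of empty-prompt calibration at inference time, using the Hugging Face `transformers` CLIP API. The paper's actual pre-training procedure and bias calibration loss are defined in the linked repository; here, the wording of the empty prompt, the `openai/clip-vit-base-patch32` checkpoint, and the subtraction strength `alpha` are all illustrative assumptions, not the authors' exact method.

```python
import torch
from transformers import CLIPModel, CLIPProcessor
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32").to(device)
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

class_names = ["cat", "dog", "airplane"]
template = "a photo of a {}."
class_prompts = [template.format(name) for name in class_names]
# A hypothetical "empty" prompt: the template with no category filled in,
# intended to carry template information but no class information.
empty_prompt = "a photo of a ."

with torch.no_grad():
    text_inputs = processor(
        text=class_prompts + [empty_prompt], return_tensors="pt", padding=True
    ).to(device)
    text_feats = model.get_text_features(**text_inputs)
    text_feats = text_feats / text_feats.norm(dim=-1, keepdim=True)

class_feats, empty_feat = text_feats[:-1], text_feats[-1:]

# Illustrative calibration: subtract the template-only embedding so the
# remaining class embeddings reflect category content rather than template
# proximity. The scale alpha is an assumed hyperparameter, not a value
# taken from the paper.
alpha = 0.5
debiased_feats = class_feats - alpha * empty_feat
debiased_feats = debiased_feats / debiased_feats.norm(dim=-1, keepdim=True)

# Classify an image against the calibrated class embeddings.
image = Image.open("example.jpg")
img_inputs = processor(images=image, return_tensors="pt").to(device)
with torch.no_grad():
    img_feat = model.get_image_features(**img_inputs)
img_feat = img_feat / img_feat.norm(dim=-1, keepdim=True)

logits = img_feat @ debiased_feats.T
print("prediction:", class_names[logits.argmax(dim=-1).item()])
```

In this sketch the correction is applied only to the text side at inference; the paper additionally intervenes during pre-training and fine-tuning, where the calibration is learned rather than fixed by a hand-set `alpha`.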
Similar Papers
Few-Shot Remote Sensing Image Scene Classification with CLIP and Prompt Learning
CV and Pattern Recognition
Teaches computers to understand satellite pictures better.
LeakyCLIP: Extracting Training Data from CLIP
Cryptography and Security
Steals private pictures from AI's memory.