BiPrompt: Bilateral Prompt Optimization for Visual and Textual Debiasing in Vision-Language Models
By: Sunny Gupta, Shounak Das, Amit Sethi
Potential Business Impact:
Makes AI see and understand images correctly by ignoring misleading cues.
Vision-language foundation models such as CLIP exhibit impressive zero-shot generalization yet remain vulnerable to spurious correlations across visual and textual modalities. Existing debiasing approaches often address a single modality, either visual or textual, leading to partial robustness and unstable adaptation under distribution shifts. We propose a bilateral prompt optimization framework (BiPrompt) that simultaneously mitigates non-causal feature reliance in both modalities during test-time adaptation. On the visual side, it employs structured attention-guided erasure to suppress background activations and enforce orthogonal prediction consistency between causal and spurious regions. On the textual side, it introduces balanced prompt normalization, a learnable re-centering mechanism that aligns class embeddings toward an isotropic semantic space. Together, these modules jointly minimize the conditional mutual information between spurious cues and predictions, steering the model toward causal, domain-invariant reasoning without retraining or domain supervision. Extensive evaluations on real-world and synthetic bias benchmarks demonstrate consistent improvements in both average and worst-group accuracy over prior test-time debiasing methods, establishing a lightweight yet effective path toward trustworthy and causally grounded vision-language adaptation.
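The two modules in the abstract can be illustrated with a minimal numpy sketch. This is not the paper's implementation: the function names, the quantile threshold for separating high- from low-attention patches, and the `keep_ratio` parameter are all illustrative assumptions. It only shows the two core ideas: erasing complementary image regions based on attention to obtain "causal" and "spurious" views, and re-centering unit-normalized class text embeddings so that no shared direction dominates the semantic space.

```python
import numpy as np

def attention_guided_erasure(patch_features, attention, keep_ratio=0.5):
    """Split patch features into a putatively causal (high-attention) view
    and a spurious (low-attention / background) view by erasing each set.

    Hypothetical sketch: threshold the attention map at a quantile.
    """
    thresh = np.quantile(attention, 1.0 - keep_ratio)
    causal_mask = (attention >= thresh).astype(patch_features.dtype)
    # Background erased: only high-attention patches survive.
    causal_view = patch_features * causal_mask[:, None]
    # Foreground erased: only low-attention (background) patches survive.
    spurious_view = patch_features * (1.0 - causal_mask)[:, None]
    return causal_view, spurious_view

def balanced_prompt_normalization(class_embeddings):
    """Re-center class text embeddings toward an isotropic semantic space.

    Hypothetical sketch: unit-normalize, subtract the shared mean
    direction so no common component dominates, then project back
    onto the unit sphere.
    """
    E = np.asarray(class_embeddings, dtype=np.float64)
    E = E / np.linalg.norm(E, axis=1, keepdims=True)   # unit-normalize
    mean = E.mean(axis=0, keepdims=True)               # shared bias direction
    centered = E - mean                                # remove common component
    return centered / np.linalg.norm(centered, axis=1, keepdims=True)
```

In the paper's setting these operations would be applied to CLIP patch tokens and prompt embeddings during test-time adaptation; here plain arrays stand in so the mechanics are visible. Note that the two erased views are complementary by construction, which is what makes a consistency or orthogonality objective between their predictions meaningful.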
Similar Papers
BMIP: Bi-directional Modality Interaction Prompt Learning for VLM
Machine Learning (CS)
Helps computers understand pictures and words together better.
See Less, See Right: Bi-directional Perceptual Shaping For Multimodal Reasoning
CV and Pattern Recognition
Helps computers see details in pictures better.
Exposing Hidden Biases in Text-to-Image Models via Automated Prompt Search
Machine Learning (CS)
Finds hidden unfairness in AI art.