Contamination Detection for VLMs using Multi-Modal Semantic Perturbation

Published: November 5, 2025 | arXiv ID: 2511.03774v1

By: Jaden Park, Mu Cai, Feng Yao, and more

Potential Business Impact:

Detects whether an AI model saw benchmark test answers during training.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Recent advances in Vision-Language Models (VLMs) have achieved state-of-the-art performance on numerous benchmark tasks. However, the use of internet-scale, often proprietary, pretraining corpora raises a critical concern for both practitioners and users: inflated performance due to test-set leakage. While prior works have proposed mitigation strategies such as decontamination of pretraining data and benchmark redesign for LLMs, the complementary direction of developing detection methods for contaminated VLMs remains underexplored. To address this gap, we deliberately contaminate open-source VLMs on popular benchmarks and show that existing detection approaches either fail outright or exhibit inconsistent behavior. We then propose a novel, simple yet effective detection method based on multi-modal semantic perturbation, demonstrating that contaminated models fail to generalize under controlled perturbations. Finally, we validate our approach across multiple realistic contamination strategies, confirming its robustness and effectiveness. The code and perturbed dataset will be released publicly.
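To illustrate the core idea in the abstract (contaminated models generalize poorly under controlled perturbations), here is a minimal sketch of a perturbation-based contamination check. It assumes a user-supplied model callable and a semantic perturbation function; the names (evaluate-style helpers, the accuracy-drop threshold) are illustrative assumptions, not the paper's actual code or API.

```python
# Hypothetical sketch: compare a VLM's accuracy on the original benchmark
# with its accuracy on a semantically perturbed copy. A sharp drop suggests
# the model memorized the original test items rather than generalizing.
# All names and the threshold value below are assumptions for illustration.

from typing import Callable, Iterable

ACCURACY_DROP_THRESHOLD = 0.15  # assumed cutoff; the paper may calibrate this differently


def accuracy(model: Callable, examples: Iterable[dict]) -> float:
    """Fraction of (image, question, answer) examples the model answers correctly."""
    examples = list(examples)
    correct = sum(model(ex["image"], ex["question"]) == ex["answer"] for ex in examples)
    return correct / len(examples)


def contamination_score(model: Callable,
                        benchmark: list[dict],
                        perturb: Callable[[dict], dict]) -> float:
    """Accuracy gap between the original and the perturbed benchmark.

    A contaminated model has effectively seen the original question-answer
    pairs, so its accuracy should fall much more under semantic perturbation
    than that of a model that genuinely generalizes.
    """
    original_acc = accuracy(model, benchmark)
    perturbed_acc = accuracy(model, [perturb(ex) for ex in benchmark])
    return original_acc - perturbed_acc


def is_likely_contaminated(model: Callable,
                           benchmark: list[dict],
                           perturb: Callable[[dict], dict]) -> bool:
    """Flag a model whose accuracy drop exceeds the assumed threshold."""
    return contamination_score(model, benchmark, perturb) > ACCURACY_DROP_THRESHOLD
```

In practice, `perturb` would apply the paper's multi-modal semantic perturbations (jointly altering image and question while preserving task difficulty); the sketch only captures the comparison logic, not how those perturbations are constructed.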

Page Count
18 pages

Category
Computer Science:
Machine Learning (CS)