An Empirical Analysis of VLM-based OOD Detection: Mechanisms, Advantages, and Sensitivity
By: Yuxiao Lee, Xiaofeng Cao, Wei Ye, and more
Potential Business Impact:
AI sees new things better, but words matter.
Vision-Language Models (VLMs), such as CLIP, have demonstrated remarkable zero-shot out-of-distribution (OOD) detection capabilities, vital for reliable AI systems. Despite this promising capability, a comprehensive understanding of (1) why they work so effectively, (2) what advantages they have over single-modal methods, and (3) how robust their behavior is remains notably incomplete within the research community. This paper presents a systematic empirical analysis of VLM-based OOD detection using in-distribution (ID) and OOD prompts. (1) Mechanisms: We systematically characterize and formalize key operational properties within the VLM embedding space that facilitate zero-shot OOD detection. (2) Advantages: We empirically quantify the superiority of these models over established single-modal approaches, attributing this distinct advantage to the VLM's capacity to leverage rich semantic novelty. (3) Sensitivity: We uncover a significant and previously under-explored asymmetry in their robustness profile: while exhibiting resilience to common image noise, these VLM-based methods are highly sensitive to prompt phrasing. Our findings contribute a more structured understanding of the strengths and critical vulnerabilities inherent in VLM-based OOD detection, offering crucial, empirically grounded guidance for developing more robust and reliable future designs.
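To make the prompt-based detection setup concrete, below is a minimal sketch of one common zero-shot OOD scoring scheme with CLIP: embed the image, embed prompts for the in-distribution classes, and use the maximum softmax over cosine similarities as an ID score. This is an illustrative approximation, not the paper's exact method; the class names, image path, temperature, and threshold are placeholder assumptions.

```python
# Minimal sketch: CLIP-based zero-shot OOD scoring with ID prompts.
# Requires the OpenAI `clip` package (pip install git+https://github.com/openai/CLIP.git).
import torch
import clip
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

# Assumed ID label set and prompt template (placeholders for illustration).
id_class_names = ["dog", "cat", "car"]
prompts = [f"a photo of a {c}" for c in id_class_names]
text_tokens = clip.tokenize(prompts).to(device)

# "query.jpg" is a placeholder path to the test image.
image = preprocess(Image.open("query.jpg")).unsqueeze(0).to(device)

with torch.no_grad():
    img_feat = model.encode_image(image)
    txt_feat = model.encode_text(text_tokens)
    # Normalize so dot products are cosine similarities.
    img_feat = img_feat / img_feat.norm(dim=-1, keepdim=True)
    txt_feat = txt_feat / txt_feat.norm(dim=-1, keepdim=True)
    sims = img_feat @ txt_feat.T  # similarity to each ID prompt

temperature = 0.01          # assumed temperature; controls softmax sharpness
id_score = torch.softmax(sims / temperature, dim=-1).max().item()
is_id = id_score > 0.5      # assumed threshold; tune on held-out data
print(f"max-softmax ID score = {id_score:.3f} -> {'ID' if is_id else 'OOD'}")
```

In this scheme, prompt phrasing directly shapes the text embeddings and hence the similarity scores, which is consistent with the paper's observation that such methods are sensitive to how prompts are worded.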
Similar Papers
Delving into Out-of-Distribution Detection with Medical Vision-Language Models
CV and Pattern Recognition
Helps AI spot strange medical images.
Recent Advances in Out-of-Distribution Detection with CLIP-Like Models: A Survey
CV and Pattern Recognition
Helps AI spot fake or unusual pictures.
A Review of 3D Object Detection with Vision-Language Models
CV and Pattern Recognition
Lets computers see and name objects in 3D.