The Pervasive Blind Spot: Benchmarking VLM Inference Risks on Everyday Personal Videos
By: Shuning Zhang, Zhaoxin Li, Changxi Wen, and more
Potential Business Impact:
Shows that AI can guess private details from your videos.
The proliferation of Vision-Language Models (VLMs) introduces profound privacy risks for personal videos. This paper addresses a critical yet unexplored inferential privacy threat: the risk of inferring sensitive personal attributes from the data. To address this gap, we crowdsourced a dataset of 508 everyday personal videos from 58 individuals and conducted a benchmark study evaluating VLM inference capabilities against human performance. Our findings reveal three critical insights: (1) VLMs possess superhuman inferential capabilities, significantly outperforming human evaluators by shifting from object recognition to behavioral inference over temporal streams. (2) Inferential risk is strongly correlated with factors such as video characteristics and prompting strategies. (3) VLM-generated explanations for these inferences are unreliable: we reveal a disconnect between model-generated explanations and their evidential impact, identifying ubiquitous objects as misleading confounders.
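To make the benchmark setup concrete, here is a minimal sketch of what such an attribute-inference probe could look like. The paper does not publish its evaluation code, so everything below is illustrative: the `query_vlm` helper, the attribute list, and the prompt wording are hypothetical stand-ins; only the uniform frame sampling with OpenCV reflects a standard approach to giving a VLM the temporal stream rather than a single image.

```python
# Illustrative sketch of probing a VLM for sensitive-attribute inference.
# NOTE: query_vlm() is a hypothetical stand-in for whatever VLM API is used;
# the attribute list and prompt are invented for illustration, not the paper's.
import cv2

ATTRIBUTES = ["approximate age", "occupation", "home location", "income level"]

def sample_frames(video_path: str, num_frames: int = 8) -> list:
    """Uniformly sample frames so the model sees the temporal stream."""
    cap = cv2.VideoCapture(video_path)
    total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    frames = []
    for i in range(num_frames):
        cap.set(cv2.CAP_PROP_POS_FRAMES, i * total // num_frames)
        ok, frame = cap.read()
        if ok:
            frames.append(frame)
    cap.release()
    return frames

def build_prompt(attribute: str) -> str:
    """A direct-elicitation prompt; the study varies prompting strategies."""
    return (
        f"Based only on these video frames, infer the subject's {attribute}. "
        "State your best guess, your confidence (0-1), and the visual evidence."
    )

def probe_video(video_path: str, query_vlm) -> dict:
    """Query the VLM once per sensitive attribute and collect its guesses."""
    frames = sample_frames(video_path)
    return {attr: query_vlm(frames, build_prompt(attr)) for attr in ATTRIBUTES}
```

Scoring such model outputs against human annotators' guesses on the same videos is the kind of head-to-head comparison the abstract describes, and checking whether the cited "visual evidence" actually drives the guess is how the explanation-reliability gap would surface.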
Similar Papers
Through Their Eyes: User Perceptions on Sensitive Attribute Inference of Social Media Videos by Visual Language Models
Human-Computer Interaction
AI can guess private things about you from photos.
Zero-shot image privacy classification with Vision-Language Models
CV and Pattern Recognition
Makes computers better at guessing which pictures are private.
Bias in the Picture: Benchmarking VLMs with Social-Cue News Images and LLM-as-Judge Assessment
CV and Pattern Recognition
Finds and fixes unfairness in AI that sees and reads.