Zero-shot image privacy classification with Vision-Language Models
By: Alina Elena Baia, Alessio Xompero, Andrea Cavallaro
Potential Business Impact:
Helps computers tell which pictures are private.
While specialized learning-based models have historically dominated image privacy prediction, the current literature increasingly favours adopting large Vision-Language Models (VLMs) designed for generic tasks. Without systematic evaluation, this trend risks overlooking the performance ceiling set by purpose-built models. To address this problem, we establish a zero-shot benchmark for image privacy classification that enables a fair comparison. We evaluate the top three open-source VLMs according to a privacy benchmark, using task-aligned prompts, and contrast their performance, efficiency, and robustness against established vision-only and multi-modal methods. Counter-intuitively, our results show that VLMs, despite their high parameter counts and slower inference, currently lag behind specialized, smaller models in privacy prediction accuracy. We also find, however, that VLMs are more robust to image perturbations.
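The abstract does not specify the exact models or prompts used, but the following is a minimal sketch of what zero-shot image privacy classification via image-text matching can look like, assuming a CLIP-style checkpoint from Hugging Face; the model name, prompt wording, and image path are illustrative assumptions, not the paper's setup.

```python
# Minimal sketch: zero-shot "private vs. public" image classification via
# CLIP-style image-text matching. Model name, prompts, and image path are
# illustrative assumptions, not the paper's exact configuration.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model_id = "openai/clip-vit-base-patch32"  # assumed off-the-shelf checkpoint
model = CLIPModel.from_pretrained(model_id)
processor = CLIPProcessor.from_pretrained(model_id)

# Task-aligned text prompts, one per class (hypothetical wording).
prompts = [
    "a photo revealing private information about a person",
    "a generic public photo",
]

image = Image.open("example.jpg")  # placeholder path
inputs = processor(text=prompts, images=image, return_tensors="pt", padding=True)

with torch.no_grad():
    logits = model(**inputs).logits_per_image  # shape: (1, num_prompts)
probs = logits.softmax(dim=-1).squeeze(0)

label = "private" if probs[0] > probs[1] else "public"
print(f"prediction: {label} (p_private={probs[0]:.3f})")
```

Generative VLMs evaluated in this line of work are instead typically prompted with an instruction (e.g., asking the model to answer "private" or "public" for an image) and their text output is parsed into a label; the matching-based sketch above is only one common way to obtain zero-shot class scores.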
Similar Papers
Image Recognition with Vision and Language Embeddings of VLMs
CV and Pattern Recognition
Helps computers understand pictures better with words or just sight.
Visual Language Models as Zero-Shot Deepfake Detectors
CV and Pattern Recognition
Finds fake videos better than old ways.
A Survey of State of the Art Large Vision Language Models: Alignment, Benchmark, Evaluations and Challenges
CV and Pattern Recognition
Lets computers understand pictures and words together.